Aquatic's Python Programming Interview

If you have an on-site Python interview at Aquatic, you'll use this prepared workspace to solve the programming problem described below. We're sharing it with everyone to ensure a level playing field when it comes to knowledge about technical interviews in the finance industry.

During your interview, you'll work alongside another Aquatic software engineer to write the code and come up with a complete working solution in 2 hours. You'll be able to look things up on the Internet, just as you would on any regular working day. You'll use a Linux workstation (like the ones we use) to write the code in an editor of your choice. Along the way, the requirements may change, so be prepared to adjust accordingly. We'll try to make this experience as realistic as possible, so you can get a sense of what working at Aquatic is really like.

Here are some of the criteria by which you'll be evaluated:

  • Can you create an effective solution in the time given? Does it actually run?
  • Can you adapt your solution to new requirements?
  • Can you explain what you're doing and why?
  • Can you maintain good project hygiene, treating this like "real world" software?

If you prefer, you can solve this problem on your own time, using your own computer (Linux recommended). Then, once you arrive at Aquatic for your interview, you'll spend some time working with another engineer to adapt your solution to new requirements. Ask your contact at Aquatic how to submit your solution before your interview.

The Problem

Included in this repository is a data set taken from the City of Chicago's Open Data portal. It has weather data from Chicago beaches in CSV format.

You'll need to write a command-line tool in Python that turns the (roughly) hourly temperature samples into daily aggregates: the start, end, high, and low of the Air Temperature values for each day, at each weather station. For example, assuming the temperature values on a particular day at a particular station were:

Foster Weather Station,01/01/2016 11:00:00 PM,69
Foster Weather Station,01/01/2016 08:00:00 PM,70
Foster Weather Station,01/01/2016 07:00:00 PM,70
Foster Weather Station,01/01/2016 06:00:00 PM,72
Foster Weather Station,01/01/2016 05:00:00 PM,72
Foster Weather Station,01/01/2016 04:00:00 PM,73
Foster Weather Station,01/01/2016 03:00:00 PM,69
Foster Weather Station,01/01/2016 02:00:00 PM,70
Foster Weather Station,01/01/2016 01:00:00 PM,70
Foster Weather Station,01/01/2016 12:00:00 PM,70
Foster Weather Station,01/01/2016 11:00:00 AM,70
Foster Weather Station,01/01/2016 10:00:00 AM,70
Foster Weather Station,01/01/2016 09:00:00 AM,70
Foster Weather Station,01/01/2016 08:00:00 AM,71
Foster Weather Station,01/01/2016 07:00:00 AM,72
Foster Weather Station,01/01/2016 06:00:00 AM,72
Foster Weather Station,01/01/2016 05:00:00 AM,71
Foster Weather Station,01/01/2016 04:00:00 AM,69
Foster Weather Station,01/01/2016 03:00:00 AM,67
Foster Weather Station,01/01/2016 02:00:00 AM,64
Foster Weather Station,01/01/2016 01:00:00 AM,67
Foster Weather Station,01/01/2016 12:00:00 AM,67

Then the expected values for start, end, high, and low for this day would be:

  • start: 67
  • end: 69
  • high: 73
  • low: 64

The program should read the data from STDIN and output the aggregated data to STDOUT in CSV format. The exact details of the output format are up to you.
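
To make the shape of the task concrete, here's a minimal sketch of one possible approach using only the standard library. It assumes headerless, three-column rows like the example above (station, timestamp, temperature); the real dataset has a header row and additional columns, so treat the parsing as an illustration, not a complete solution.

import csv
import sys
from collections import defaultdict
from datetime import datetime

def main():
    # Group samples by (station, calendar date).
    samples = defaultdict(list)
    for station, stamp, temp in csv.reader(sys.stdin):
        ts = datetime.strptime(stamp, "%m/%d/%Y %I:%M:%S %p")
        samples[(station, ts.date())].append((ts, float(temp)))

    # One output row per station per day: start, end, high, low.
    writer = csv.writer(sys.stdout)
    writer.writerow(["station", "date", "start", "end", "high", "low"])
    for (station, day), rows in sorted(samples.items()):
        rows.sort()  # chronological order within the day
        temps = [t for _, t in rows]
        writer.writerow([station, day, rows[0][1], rows[-1][1], max(temps), min(temps)])

if __name__ == "__main__":
    main()

Fed the example day above, this would emit a row like Foster Weather Station,2016-01-01,67.0,69.0,73.0,64.0.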

Problem Environment

This repository has a Makefile, prepared for a Linux environment, with various targets for running tests and executing the program. If you're not familiar with make, don't worry. You'll just need to use a few commands, all of which will be shown if you run the make command in the root directory of the repository.

aquanauts/interview$ make
repl                           Run an iPython REPL
run                            Run the program on the provided dataset
test                           Run tests
watch                          Run unit tests continuously

For example, to run the tests, run make test. The watch target will run the tests automatically whenever you change a .py file. Any of these targets will automatically install all the necessary dependencies (including miniconda3) into the repository directory.
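
As an illustration of the kind of unit test make test could pick up, here's a hypothetical pytest-style test for the example day above. The summarize helper is an assumption (the repository doesn't prescribe an API); it's inlined here to keep the example self-contained.

from datetime import datetime

def summarize(samples):
    # Hypothetical helper: (timestamp, temp) pairs -> (start, end, high, low).
    ordered = sorted(samples)
    temps = [t for _, t in ordered]
    return ordered[0][1], ordered[-1][1], max(temps), min(temps)

def test_example_day():
    day = [
        (datetime(2016, 1, 1, 0, 0), 67.0),   # 12:00 AM: start
        (datetime(2016, 1, 1, 2, 0), 64.0),   # daily low
        (datetime(2016, 1, 1, 16, 0), 73.0),  # daily high
        (datetime(2016, 1, 1, 23, 0), 69.0),  # 11:00 PM: end
    ]
    assert summarize(day) == (67.0, 69.0, 73.0, 64.0)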

You are encouraged to take a few minutes to clone this repository and experiment with this environment before your interview, so that it is familiar to you when you arrive. Feel free to ask any questions if you run into problems.
