This is the latest in our “Inside the Stack” series featuring Underdog.io customers. This week, we hear from Daniel Schwartz, Senior Director of Engineering at Harry’s. The Harry’s team has made one hire through the Underdog.io platform; they are planning to hire many more across a number of departments.
Harry’s was started with one simple belief: everyone deserves a great shave at a fair price. We strive to make the shaving experience better for our customers every day, from purchase to product use. From a technical standpoint we have an interesting team setup in that we have two distinct technical teams, each responsible for separate areas.
The Platform Engineering team is responsible for the day-to-day operations of our e-commerce systems (including the website), our API, and most interestingly, our smart shipping/inventory/supply chain/distribution engine that powers all our after-purchase decision engines.
The Data Engineering team is responsible for building distributed systems that reliably ingest, store, and index data from a broad set of sources for intelligent applications and analysis. The data we collect not only drives decisions across Harry’s, but powers our in-house email platform that we use for lifecycle marketing, personalization, and experimentation.
Rails: This is our main web framework for the publicly accessible website and API. It may not be the sexiest choice, but it’s been battle-tested, and we’ve been able to tweak it to be highly performant when we need it to be. Coupled with smart caching and background jobs, it performs well; that said, we’re always exploring ways to optimize our stack.
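The caching pattern behind that setup is worth making concrete. Here is a minimal cache-aside sketch in plain Ruby — the `Cache` class is hypothetical; inside Rails this is what `Rails.cache.fetch` does against a real backend:

```ruby
# Minimal cache-aside sketch (hypothetical class, illustrating what
# Rails.cache.fetch does against a real store like Redis or Memcache).
class Cache
  def initialize
    @store = {}
  end

  # Return the cached value if present; otherwise run the block,
  # cache its result, and return it.
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache = Cache.new
calls = 0
2.times { cache.fetch("product:1") { calls += 1; "razor" } }
# The expensive block ran once; the second read was a cache hit.
```

The win is that the expensive computation (a query, a render) only happens on a miss, which is why pairing Rails with smart caching keeps hot pages fast.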
Postgres: Like I said about Rails above, it’s been battle-tested, and the last thing we want is surprises when it comes to our main relational data store. It’s also fantastic at fast replication, which lets our two teams stream data to one another and confirm it was committed to the remote database.
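One way to confirm a commit has reached a replica is to compare write-ahead-log positions. Postgres reports these as LSN strings like "0/16B3748"; the sketch below (helper names are our own, not Postgres APIs) converts them to 64-bit integers so they can be compared:

```ruby
# Hedged sketch: has a replica replayed a given commit? Postgres LSNs
# are "high/low" hex pairs; converting to a 64-bit integer makes them
# directly comparable.
def lsn_to_i(lsn)
  hi, lo = lsn.split("/").map { |part| part.to_i(16) }
  (hi << 32) | lo
end

# A replica has the commit once its replayed LSN is at or past the
# LSN the primary reported for that commit.
def replayed?(commit_lsn, replica_replay_lsn)
  lsn_to_i(replica_replay_lsn) >= lsn_to_i(commit_lsn)
end

replayed?("0/16B3748", "0/16B3750")  # replica is slightly ahead
```

In practice the two LSNs would come from the primary’s `pg_current_wal_lsn()` and the replica’s `pg_last_wal_replay_lsn()`; the comparison logic is the same.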
Redis: Like Memcache, only way better. This runs a bunch of different parts of our infrastructure, including caching, job queues, inventory, and A/B testing. We love that it can be backed up to disk (in case of a catastrophic failure), can be clustered, and that separate clusters can talk to each other with built-in replication.
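The A/B testing use case boils down to giving every user a stable experiment bucket. A common approach, sketched below in plain Ruby (the function name is hypothetical, and a production system might persist assignments in Redis so they survive config changes), is to hash the user id with the experiment name:

```ruby
require "digest"

# Hedged sketch of deterministic A/B bucketing. Hashing the experiment
# name together with the user id gives every user a stable variant
# without any coordination or stored state.
def bucket_for(experiment, user_id, variants)
  digest = Digest::SHA1.hexdigest("#{experiment}:#{user_id}")
  variants[digest.to_i(16) % variants.size]
end

bucket_for("checkout_copy", 42, ["control", "treatment"])
```

Because the hash is deterministic, a returning user always lands in the same bucket, which keeps experiment results clean.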
Spark: This is the newest addition to our data arsenal, and it’s pretty amazing. It allows the Data Engineering team to run queries and build new views of our data at lightning fast speeds. It runs in memory, allows us to write in Scala (which is type safe – fewer bugs!), and has become central to our data pipelines and applications.
Heroku: This is probably the biggest external DevOps tool our two teams use. It allows us to focus on building rather than managing deployments and servers, which is a big deal. Since it handles the actual deployment procedure, it frees up time for us to prioritize building really interesting internal DevOps tools.
AWS: Not quite a DevOps tool, but as the Data Engineering team has incorporated data infrastructure that falls outside Heroku’s model or hosted databases, we’ve begun relying on AWS for hosting & provisioning. Our deployment model here is a bit of a blank slate today, so we’re looking for engineers with DevOps experience as well!
Dynosaur: Our autoscaler for web dynos and Heroku add-ons. It lets us absorb an influx of traffic without ever worrying about manually scaling our web fleet or the add-ons that power it.
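The core of any such autoscaler is a small decision function. This sketch is illustrative only — the thresholds and names are assumptions, not Dynosaur’s actual configuration — but it shows the shape of the logic: size the fleet to the observed load, bounded by a floor and a ceiling:

```ruby
# Hedged sketch of an autoscaling decision (thresholds are made up).
# Given observed requests per second, pick a dyno count that keeps
# per-dyno load near a target, never dropping below a safety floor
# or exceeding a cost ceiling.
def desired_dynos(rps, per_dyno_target: 100, min: 2, max: 50)
  needed = (rps.to_f / per_dyno_target).ceil
  needed.clamp(min, max)
end

desired_dynos(750)  # => 8
desired_dynos(50)   # => 2 (never below the floor)
```

The floor keeps the site resilient to sudden spikes; the ceiling caps spend if a metric goes haywire.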
Kontrast: Our automated front-end testing tool that alerts us to breaking CSS changes before a deploy.
Cronut: Our dead man’s switch for cron jobs. It alerts us when a job fails to run, instead of requiring someone to watch for success emails.
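The dead man’s switch pattern inverts normal monitoring: jobs check in on success, and silence is what triggers the alert. A minimal sketch of the idea (the class and its interface are hypothetical, not Cronut’s actual API):

```ruby
# Hedged sketch of a dead man's switch. Each cron job pings on
# success; a watcher flags any job whose last ping is older than
# its expected interval -- or that has never pinged at all.
class DeadMansSwitch
  def initialize
    @last_ping = {}
    @interval  = {}
  end

  def register(job, interval_seconds)
    @interval[job] = interval_seconds
  end

  def ping(job, at: Time.now)
    @last_ping[job] = at
  end

  # Jobs that missed their window and should trigger an alert.
  def overdue(now: Time.now)
    @interval.keys.reject do |job|
      last = @last_ping[job]
      last && (now - last) <= @interval[job]
    end
  end
end
```

The key property is that a job that silently dies — or never starts — still gets flagged, because the alert fires on the *absence* of a ping rather than the presence of a failure.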
Deployment Runways: Using Hubot to automate deployments to staging and production environments, as well as to automate smoke tests on staging. Bugs we catch with manual testing on staging never become costly production incidents or rollbacks.
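The gating logic behind a runway like that is simple to sketch. The function below is an illustration in Ruby (the real pipeline runs through Hubot chat commands, and these names are our own): production only deploys when the staging smoke tests pass.

```ruby
# Hedged sketch of staging-then-production deploy gating. The deploy
# and smoke_test callables are injected so the flow is testable; in
# reality they would shell out to Heroku and a test runner.
def run_deploy(sha, deploy:, smoke_test:)
  deploy.call(:staging, sha)
  return :held_on_staging unless smoke_test.call(sha)
  deploy.call(:production, sha)
  :deployed
end
```

Because production is unreachable without a green smoke test, a broken build stops on staging instead of turning into a production rollback.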
I’m most excited by the Amazon-esque problems we’re going to get to solve on both the Platform and Data sides of the world.
On the Platform side, we’re just scratching the surface of all the interesting distribution and shipping problems we’re going to solve. Answering questions like: How do we get our products to our customers in the most efficient, cost-effective, and fastest way possible? How do we optimize our shipping algorithms to take into account new carriers, shipping methods, and distribution centers? And how do we accomplish all of the above quickly without impacting platform performance?
On the Data side, we’re excited about scaling out our data platform to accommodate more voluminous, unstructured data as well as the breadth of problems we’re able to address with all of it. Some of the core business problems we’re looking forward to tackling later this year include predictive analyses to enable more efficient customer acquisition & supply chain forecasting.
Thanks to Daniel and the whole Harry’s team. Visit harrys.com to read more about the company and to view their product line. Companies: email [email protected] if you’d like to be featured in this series.
Every week we send out a newsletter called Ruff Notes with our personal thoughts on something interesting we’ve read, as well as product updates and news from our community.