Pushing to production many times a day
Our build process produces a single “fat jar”, and we’ve set up a fully automated, one-click deployment process, which means a much quicker feedback loop – from commit to production.
Every service is automatically monitored in real-time to ensure a great experience 24/7 all over the world.
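Real-time monitoring usually starts with each service exposing a health endpoint that the monitoring system polls. As a minimal sketch (using only the JDK’s built-in HTTP server; the `/health` path and JSON body are illustrative assumptions, not Base’s actual monitoring contract):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthCheck {
    // Starts an HTTP server exposing /health; a monitoring system polls this
    // endpoint, and any non-200 response (or no response) triggers an alert.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

In practice such an endpoint would also report the status of downstream dependencies (database, queues) rather than a constant “UP”.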
Scalability with Lightweight Stack
We weren’t the first to figure out that in this day and age we no longer need heavy application servers. But we saw this shift coming and made the right strategic technology decisions early on. Our architecture is structured around microservices, which allow for better scalability, isolation and extensibility.
The JVM is particularly well-suited for building REST services that can stand up to massive volumes of traffic. As the scale of our business rapidly grows, our services need to be reliable, high-performance and fault-tolerant – we’ve chosen Netflix’s RxJava and Hystrix to help us achieve these goals.
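The fault-tolerance idea at the heart of Hystrix is the circuit breaker: after repeated failures, stop calling the broken dependency and fall back immediately. A minimal sketch of that idea (deliberately not the real Hystrix API, and omitting its timeouts, thread-pool isolation and half-open recovery):

```java
import java.util.function.Supplier;

// Minimal circuit breaker: after `threshold` consecutive failures the circuit
// opens, and calls go straight to the fallback instead of the failing service.
public class CircuitBreaker<T> {
    private final int threshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public T call(Supplier<T> primary, Supplier<T> fallback) {
        if (consecutiveFailures >= threshold) {
            return fallback.get();          // circuit open: fail fast
        }
        try {
            T result = primary.get();
            consecutiveFailures = 0;        // success closes the circuit
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback.get();          // degrade gracefully on error
        }
    }
}
```

The real library additionally probes the dependency after a cooldown (“half-open” state) so the circuit can close again once the service recovers.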
Challenges we face every day
Here are a few projects that demonstrate the level and complexity of the technical challenges we work on every day:
Using MySQL binlogs and DynamoDB we’ve built Base Firehose – a highly available, asynchronous data pipeline of all events occurring in Base. Exposed via a REST API and Kafka, it is both easy to use and a highly scalable source of data. It also allows for fast indexing of internal read models as well as robust integration with our customers.
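One way such a pipeline enables “fast indexing of internal read models” is event replay: consumers rebuild their own view of the data by applying the change stream in order. A toy sketch (the event shape and field names are assumptions for illustration, not the actual Firehose schema):

```java
import java.util.HashMap;
import java.util.Map;

// A read model kept in sync by replaying change events from a firehose-style
// stream; replaying the full stream from the start rebuilds it from scratch.
public class ContactReadModel {
    public static final class ChangeEvent {
        final long id;
        final String operation;   // "create", "update" or "delete"
        final String name;
        public ChangeEvent(long id, String operation, String name) {
            this.id = id;
            this.operation = operation;
            this.name = name;
        }
    }

    private final Map<Long, String> namesById = new HashMap<>();

    // Events must be applied in stream order for the model to stay consistent.
    public void apply(ChangeEvent e) {
        if (e.operation.equals("delete")) {
            namesById.remove(e.id);
        } else {                          // "create" or "update"
            namesById.put(e.id, e.name);
        }
    }

    public String nameOf(long id) {
        return namesById.get(id);
    }
}
```

The same mechanism serves external integrations: a customer’s consumer applies the stream into whatever store suits them.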
Base Snap Platform allows third-party developers to deploy Java microservices inside Docker containers with ease, handling all the operational tasks such as configuration, monitoring, security, continuous integration and logging. Applications deployed on the Platform follow The Twelve-Factor App principles. The Platform also makes it possible to develop applications locally in an environment that closely reflects production.
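A concrete Twelve-Factor principle is storing configuration in the environment, so the same artifact runs unchanged locally and in production. A small sketch of that pattern (the variable names `DATABASE_URL` and `PORT` are illustrative, not Snap Platform’s actual contract):

```java
import java.util.Map;

// Twelve-Factor configuration: everything that varies between deploys comes
// from environment variables, never from files baked into the jar.
public class AppConfig {
    private final Map<String, String> env;

    // Injecting the map (e.g. System.getenv() in production) keeps this testable.
    public AppConfig(Map<String, String> env) {
        this.env = env;
    }

    public String databaseUrl() {
        return require("DATABASE_URL");
    }

    public int httpPort() {
        return Integer.parseInt(env.getOrDefault("PORT", "8080"));
    }

    private String require(String key) {
        String value = env.get(key);
        if (value == null) {
            throw new IllegalStateException("Missing required env var: " + key);
        }
        return value;
    }
}
```

Failing fast on missing required variables surfaces misconfiguration at startup rather than at first use.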
How would you know for sure that “Mr John Wayne, john+home (at) acme.com” and “Wayne-John, firstname.lastname@example.org” are one and the same person? Our own internal deduplication service automatically merges similar entities and eliminates data clutter, effectively making our users’ work easier. Underneath, Duplo uses Redis to store its indexes, enabling very fast lookups, while Kafka and a Storm topology continuously feed it with all changes.
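The first step of such matching is normalization: mapping superficially different records to a canonical key before any index lookup. A toy sketch of that step (Duplo’s real matching – Redis-backed indexes and similarity scoring – is far more sophisticated; these rules are only illustrative):

```java
import java.util.Arrays;
import java.util.Locale;

// Toy normalization for duplicate detection: collapse superficially different
// contact fields into the same canonical key.
public class DedupKey {
    // Lowercase the email and strip "+tag" sub-addresses, so
    // John+home@acme.com and john@acme.com map to the same key.
    public static String canonicalEmail(String email) {
        String e = email.trim().toLowerCase(Locale.ROOT);
        int at = e.indexOf('@');
        int plus = e.indexOf('+');
        if (plus >= 0 && plus < at) {
            e = e.substring(0, plus) + e.substring(at);
        }
        return e;
    }

    // Drop common honorifics, split on spaces/commas/hyphens, and sort the
    // name tokens so "Mr John Wayne" and "Wayne-John" normalize identically.
    public static String canonicalName(String name) {
        String[] tokens = name.toLowerCase(Locale.ROOT)
                .replaceAll("\\b(mr|mrs|ms|dr)\\b", "")
                .split("[\\s,\\-]+");
        Arrays.sort(tokens);
        return String.join(" ", tokens).trim();
    }
}
```

Canonical keys like these feed the fast index lookups; candidate pairs found there would then go through a proper similarity scoring stage before any merge.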