As a Generative AI Engineer, you are the architect of intelligence. You are a hands-on builder who lives and breathes the latest in foundation models, agentic systems, and advanced RAG pipelines. Your mission is to move beyond simple chatbot demos and create robust, production-grade GenAI systems that solve our most complex business challenges. You're equally comfortable fine-tuning a Llama model on proprietary data, designing a multi-agent workflow, or serving a low-latency API endpoint to tens of thousands of users. You don't just use AI; you build the engine that powers it.
What You'll Do:
- Architect & Build Advanced GenAI Systems: Design and implement sophisticated RAG (Retrieval-Augmented Generation) pipelines from the ground up. You will own the full process, from chunking strategies and embedding models to vector databases and retrieval logic.
- Create Agentic Workflows: Build and orchestrate multi-agent systems (e.g., using frameworks like LangGraph) that can reason, plan, and execute complex, multi-step tasks to automate entire business processes.
- Fine-Tune & Optimize LLMs: Fine-tune open-source Large Language Models (e.g., Llama 3, Qwen, Mistral) on our unique, proprietary datasets to create highly specialized, cost-effective, and powerful models that give us a competitive edge.
- Develop Production-Grade AI Services: Engineer the backend for our AI products. You will transform complex GenAI logic into robust, scalable, and low-latency microservices and APIs using Python (FastAPI, etc.).
- End-to-End Ownership: You will be responsible for the entire GenAI lifecycle: from initial data processing and prototyping to rigorous testing, deployment, monitoring (e.g., hallucination tracking, token usage, latency), and continuous improvement in a true 'you build it, you run it' culture.
What You'll Bring:
- 3-5+ years of hands-on experience in a Software Engineering or ML Engineering role, with a strong, demonstrable focus on building production systems.
- Deep, practical experience with the Generative AI ecosystem. You must be able to talk in detail about projects where you've built and deployed complex RAG pipelines or fine-tuned LLMs.
- Expert-level proficiency in Python and its core ML ecosystem (e.g., PyTorch, Hugging Face Transformers, LangChain/LlamaIndex).
- Hands-on experience with vector databases (e.g., Pinecone, Weaviate, ChromaDB) and a deep understanding of embedding models and retrieval strategies.
- Solid software engineering fundamentals are non-negotiable: Docker, Git, CI/CD, and building clean, testable RESTful APIs are second nature to you.
- A pragmatic engineering mindset: you know the trade-offs between different models, when to use a commercial API vs. an open-source model, and how to balance performance with cost and security.
- You are obsessed with solving problems, not just using cool technology.
What We Offer:
- A True 'You Build It, You Run It' Culture: You'll have end-to-end ownership of your models, from the first line of code to production monitoring.
- A Dynamic Fintech Environment: New challenges and new ideas are born every day.
- State-of-the-Art Tools: Access to the best-in-class tools and platforms (Jira, Confluence, Miro, Figma, ChatGPT, and the latest cloud services).
- Fast Decision-Making, Flat Organization: No unnecessary bureaucracy, your voice and ideas will be heard.
- A Team of A-Players: Work with and learn from highly skilled, supportive colleagues in a culture of professional debate.
- Continuous Growth: Full support for attending trainings, workshops, and conferences to stay at the cutting edge of AI.
- Competitive & Fair Compensation: We believe that great work deserves to be recognized and rewarded.