Data and AI Solutions Engineer
Reignova Technologies Company Limited
Company website: https://www.reignovatech.com/
Company logo: https://cdn.greattanzaniajobs.com/jsjobsdata/data/employer/comp_6796/logo/Reignova%20Technologies%20Company%20Limited.png
Job type: Full-time
Duty station: Dar es Salaam, Tanzania
Industry: Information Technology
Category: Computer & IT, Science & Engineering
Date posted: 2026-05-05T08:38:03+00:00
Application deadline: 2026-05-14T17:00:00+00:00
Work hours: 8
As a Data & AI Solutions Engineer at Reignova, you are the professional who turns raw enterprise data into intelligent decision-making systems that clients can act on immediately. You engineer the data pipelines that power AI, then build the AI solutions that extract value from that data. From architecting real-time data platforms, to deploying Generative AI applications that automate workflows, to building RAG systems that transform enterprise knowledge into competitive advantage, your work produces client solutions that are visible, measurable, and transformative.
Key Responsibilities
Data Engineering & Pipeline Architecture
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows that ingest, transform, and deliver high-quality structured and unstructured data from diverse enterprise sources including databases, APIs, IoT streams, and third-party platforms
- Architect enterprise data platforms using modern lakehouse and warehouse solutions (Azure Synapse Analytics, Databricks, Google BigQuery, AWS Redshift) to support both operational analytics and AI model training at scale
- Implement real-time and batch data processing frameworks using Apache Spark, Kafka, Azure Data Factory, and equivalent tools, ensuring data is fresh, reliable, and available for both business intelligence and AI consumption
- Design and enforce data governance, data quality, and master data management frameworks that ensure data assets meet the accuracy, completeness, and compliance requirements of regulated enterprise clients in banking, telecom, and government
- Build and manage data catalogues, metadata management systems, and data lineage tracking to provide full visibility into how data flows from source to insight across client environments
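For illustration only, the ETL quality gates described above follow a familiar extract-transform-load shape. The sketch below uses pandas with entirely hypothetical column names and in-memory stand-ins for the real sources and warehouse targets:

```python
# Minimal ETL sketch: extract raw records, enforce completeness and
# accuracy rules in the transform step, then load the cleaned result.
# All names and data are hypothetical illustrations.
import pandas as pd

def extract() -> pd.DataFrame:
    # Stand-in for reading from a database, API, or IoT stream.
    return pd.DataFrame({
        "customer_id": [101, 102, None, 104],
        "amount": ["250.00", "99.50", "10.00", "bad"],
    })

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.dropna(subset=["customer_id"])               # completeness rule
    df = df.assign(amount=pd.to_numeric(df["amount"], errors="coerce"))
    df = df.dropna(subset=["amount"])                     # accuracy rule
    return df.astype({"customer_id": int})

def load(df: pd.DataFrame) -> int:
    # Stand-in for writing to a warehouse table; returns rows loaded.
    return len(df)

print(load(transform(extract())))  # 2 rows survive the quality gates
```

A production version would replace the stand-ins with connectors and log the rejected rows for the data-quality dashboards mentioned above.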
AI, Generative AI & Automation Engineering
- Design and build production-grade Generative AI applications using Azure OpenAI Service, OpenAI API, and Google Vertex AI that leverage structured and unstructured enterprise data to deliver intelligent, data-driven decision support for banking, telecom, and government clients
- Architect Retrieval-Augmented Generation (RAG) systems that connect vector databases (Pinecone, Chroma, Azure AI Search) to curated enterprise data pipelines, enabling AI systems to provide accurate, context-aware responses grounded in real organizational data
- Develop Python-based AI automation pipelines for data ingestion, preprocessing, LLM chaining, output parsing, and post-processing, ensuring end-to-end workflows are production-ready, maintainable, and observable across enterprise cloud infrastructure
- Design and deploy intelligent automation workflows using n8n, Make (Integromat), and Azure Prompt Flow that integrate LLMs with enterprise data sources, CRMs, ERPs, and communication platforms to automate complex, data-intensive business processes
- Build and deploy machine learning models for predictive analytics use cases including fraud detection, customer churn prediction, demand forecasting, and process optimization using data engineered specifically for model consumption
- Engineer advanced prompt engineering frameworks, evaluation pipelines, and responsible AI guardrails for domain-specific LLM applications, ensuring AI outputs are accurate, auditable, and aligned with client compliance requirements
- Stay at the leading edge of both data engineering and AI/LLM research, translating emerging capabilities (GPT-4o, Gemini, Llama, new vector DB architectures) into practical, data-grounded client solutions
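As a toy illustration of the RAG pattern referenced above: retrieve the most relevant stored documents for a query, then build a prompt grounded in that context. Here a word-overlap score stands in for an embedding model and vector database, and all documents and names are hypothetical:

```python
# Toy sketch of the retrieval step in a RAG system. A production build
# would use embedding vectors and a vector database (e.g. a managed
# search index) instead of word-overlap scoring.
documents = [
    "Loan approvals require a verified national ID and credit history.",
    "Quarterly churn reports are generated from the billing warehouse.",
]

def score(query: str, doc: str) -> int:
    # Stand-in for cosine similarity between embedding vectors.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, top_k: int = 1) -> list[str]:
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:top_k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is required for loan approvals?"))
```

The grounding step is what keeps responses anchored in real organizational data rather than the model's general knowledge.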
Client Delivery & Collaboration
- Collaborate with solution architects, business analysts, and client stakeholders to define data and AI use cases, scope data readiness requirements, and deliver measurable ROI from data and AI investments
- Translate complex data architectures and AI capabilities into clear business value narratives for C-suite and technical stakeholders within enterprise and government client organizations
- Support pre-sales engagements by contributing data and AI expertise to solution design, technical proposals, and proof-of-concept demonstrations for prospective clients
Required Qualifications & Experience
- At least 3 years of combined professional experience spanning both data engineering and AI/ML or Generative AI systems, with demonstrable production deployments in both disciplines
- Strong data engineering proficiency: designing ETL/ELT pipelines, working with distributed data processing frameworks (Apache Spark, Kafka), and managing cloud data platforms (Azure Synapse, Databricks, BigQuery, or Redshift)
- Proven hands-on experience with Azure OpenAI, OpenAI API, or Google Vertex AI — beyond experimentation, including production LLM deployments integrated with real enterprise data sources
- Strong Python programming skills applied across both data engineering (pandas, PySpark, data transformation pipelines) and AI contexts (LLM API calls, prompt management, RAG pipelines, model inference)
- Deep familiarity with LLM orchestration frameworks including LangChain, LlamaIndex, Semantic Kernel, or Azure Prompt Flow, with experience integrating these into data-rich enterprise environments
- Solid understanding of vector databases, embedding models, and RAG architecture design, with experience connecting these systems to engineered enterprise data assets
- At least 2 years of hands-on experience building automation workflows using n8n and/or Make (Integromat), including multi-step workflows with conditional logic, error handling, and API integrations
- Experience with data governance principles, data quality frameworks, and compliance requirements applicable to regulated industries such as banking, insurance, and government
- Ability to communicate complex data and AI solutions to non-technical enterprise stakeholders, translating technical architecture into clear business value and ROI narratives
Minimum Education Requirement
- Bachelor’s Degree – Required
Candidates must hold a minimum of a Bachelor’s degree in Computer Science, Artificial Intelligence, Data Science, Mathematics, Statistics, Software Engineering, Information Technology, or a closely related quantitative field from a recognized and accredited university or institution. Applications without a qualifying degree will not be considered.
- Added Advantage: Master’s degree in Artificial Intelligence, Machine Learning, Data Engineering, Data Science, or Computational Linguistics. Open-source contributions in AI or data engineering are strong differentiators.
Required & Recommended Certifications
Candidates must hold at least one of the certifications listed below. The breadth of the list reflects the dual scope of this role; both data engineering depth and AI engineering capability are assessed at interview.
- Microsoft Azure AI Engineer Associate (AI-102)
- Google Professional Machine Learning Engineer
- Azure Data Scientist Associate (DP-100)
- AWS Certified Machine Learning – Specialty
- Google Professional Data Engineer
- Microsoft Azure Data Engineer Associate (DP-203)
- Microsoft Azure Fundamentals (AZ-900)
- Python Institute PCAP – Certified Associate in Python Programming
Preferred Skills
- Azure Prompt Flow & Semantic Kernel
- LangChain / LlamaIndex / LlamaHub
- Pinecone / Chroma / Weaviate / Qdrant (Vector DBs)
- Fine-tuning techniques (LoRA / PEFT / QLoRA)
- Multi-agent AI systems (AutoGen, CrewAI)
- dbt (Data Build Tool) for data transformation
- Apache Airflow for workflow orchestration
- MLOps & CI/CD for AI (MLflow, Azure ML Pipelines)
- Data visualization (Power BI, Looker, Tableau)
- Low-code AI (Power Automate + AI Builder)
- Real-time streaming analytics (Kafka, Azure Event Hubs)
Who You Are
- A rare professional who thinks in data pipelines and speaks in AI models: you understand that an AI system is only as good as the data that feeds it, and you are equally skilled at engineering both
- Intellectually restless – you are obsessed with what becomes possible when clean, governed, real-time enterprise data meets the latest large language models and machine learning architectures
- Builder mentality – you move from data architecture whiteboard to working AI prototype rapidly, then iterate relentlessly until the solution is production-ready and generating measurable client value
- Comfortable operating in ambiguity: both data and AI use cases at enterprise clients are rarely fully defined at the start, and you thrive in shaping clarity from complexity
- Strong communicator who can translate both data architecture decisions and AI output capabilities into business value narratives for C-suite executives, IT directors, and government stakeholders
- Committed to responsible, ethical AI and data practices – you are acutely aware of the governance, privacy, and societal implications of AI and data systems in developing-economy contexts
- Proactive learner who keeps pace with the rapidly evolving landscape of both modern data engineering tooling and generative AI advances, and brings new capabilities to client conversations before clients even know to ask
Base salary: Not disclosed
Job application procedure
Interested candidates should submit an application and CV through the application link on the job listing.