Skaff is an incubator
for Artefact technical products

Elevate technical delivery standards.
Improve commercial success.
Consolidate Artefact’s position as a leading data company.

Skaff provides open source knowledge and deployable solutions to solve foundational technical problems.

There is significant technical overhead before data and AI projects can demonstrate value.

Skaff takes on this foundational work and builds high-quality accelerators that streamline build and deployment, allowing teams to focus on value-add work.

Knowledge Packs

Get a running start

Speed up your onboarding on a technology or expertise by walking through one of our knowledge packs.

What is in the box?

Knowledge packs include a 45-minute hands-on session
and our collective convictions on how to
approach the subject.

Try one

Deployable packages

Cut through the boilerplate

Accelerate the development and industrialization of data projects by using off-the-shelf open source software.

What does it look like?

These accelerators can be Python packages,
Terraform modules, Git repository templates,
dashboard wireframes, and much more.

Try one

Fast-track your Gen AI app using
our LangChain industrialization kit.
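To give a concrete feel for what the kit builds on, here is a minimal sketch of an LCEL-style chain of the kind it helps industrialize. It uses only the public LangChain API; the prompt wording, model choice, and inputs are illustrative assumptions, not the kit's actual interface.

# Minimal LCEL sketch. Assumptions: prompt wording, model name, and inputs.
# This is not the kit's API, only the kind of chain it helps industrialize.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Compose a prompt -> chat model -> string-output chain with the pipe operator.
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n"
    "Context: {context}\n"
    "Question: {question}"
)
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

# Invoke the chain with illustrative inputs.
answer = chain.invoke(
    {"context": "Excerpt from a due diligence report...",
     "question": "What are the key risks mentioned?"}
)
print(answer)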

Success stories

Private Equity

By leveraging Skaff’s GenAI accelerators, an Artefact team quickly demonstrated the large time savings that could be achieved by indexing and querying unstructured data for M&A.

Analysts were able to ask questions about due diligence documents, market studies, expert interviews, and other reports in natural language. This allowed them to cross-reference information easily, greatly improving productivity.

Greenlit to scale to 1500 users.

Consumer Beauty

When building a data platform to support marketing use cases, Skaff accelerators fast-tracked the deployment of data lakes, data pipelines, access control, FinOps, and data governance.

With this taken care of in a matter of days instead of weeks or months, data engineers were able to focus on building data products and serve strategic use cases for the brand.

Retail

To analyze data streaming from points of sale in a fraud detection use case, Skaff’s dbt server accelerator was used to deploy and schedule analytics pipelines.

This allowed the Artefact team to quickly gain insights into fraud detection events and other incidents.

Having this accelerator ready to go allowed them to shave off weeks of development time and focus on their product.

Meet the SKAFF Staff

Alexis Vialaret
Robin Doumerc

Medium blog articles by our tech experts

The era of generative AI: What’s changing

The abundance and diversity of responses to ChatGPT and other generative AIs, whether skeptical or enthusiastic, demonstrate the changes they're bringing about and the impact...

How Artefact managed to develop a fair yet simple career system for software engineers

In today’s dynamic and ever-evolving tech industry, a career track can often feel like a winding path through a dense forest of opportunities. With rapid...

Why you need LLMOps

This article introduces LLMOps, a specialised branch merging DevOps and MLOps for managing the challenges posed by Large Language Models (LLMs)...

Unleashing the Power of LangChain Expression Language (LCEL): from proof of concept to production

LangChain has become one of the most used Python libraries for interacting with LLMs in less than a year, but LangChain was mostly a library...

How we handled profile ID reconciliation using Treasure Data Unification and SQL

In this article, we explain the challenges of ID reconciliation and demonstrate our approach to creating a unified profile ID in a Customer Data Platform, specifically...

Snowflake’s Snowday ’23: Snowballing into Data Science Success

As we reflect on the insights shared during the ‘Snowday’ event on November 1st and 2nd, a cascade of exciting revelations about the future of...

How we interview and hire software engineers at Artefact

We go through the skills we are looking for, the different steps of the process, and the commitments we make to all candidates.

Encoding categorical features in forecasting: are we all doing it wrong?

We propose a novel method for encoding categorical features specifically tailored for forecasting applications.

How we deployed a simple wildlife monitoring system on Google Cloud

We collaborated with Smart Parks, a Dutch company that provides advanced sensor solutions to conserve endangered wildlife...

Deploying Stable Diffusion on Vertex AI

This article provides a guide for deploying Stable Diffusion, a popular image generation model, on Google Cloud using Vertex AI.

All you need to know to get started with Vertex AI Pipelines

A practical presentation of a tool that distills our experience using Vertex AI Pipelines in a project running in production.

dbt coalesce 2022 recap

This edition of dbt Coalesce took place in New Orleans, and we learned a ton about the analytics engineering landscape.

Snowflake access control at scale

Snowflake | How we automated the management of an account with more than 50 users while complying with data governance standards

Forecasting something that never happened: how we estimated past promotions profitability

A guide on how to use counterfactual forecasting to estimate the cost-effectiveness of past in-store promotions in retail.

Bayesian Media Mix Modeling with limited data

How do you estimate the impact of marketing channels on sales? Media Mix Modeling is the solution, and statistics are the main resource.

Measuring the CO2eq impact of your Python Notebook (Azure ML)

After my first story on code optimisation, which reduced my computing time by 90%, I was interested in knowing the CO2eq impact saved by my...

A manifesto to include ML Engineers in your data science projects from day 1

Jeffrey Kane, Senior Data Scientist, explains why ML Engineers should be in your data science projects from day one.

What does the future of data engineering look like?

The field of data engineering is evolving quickly. Discover 3 major trends I see becoming prominent in the coming years.

Is Facebook Prophet suited for doing good predictions in a real-world project?

This guide will help you figure out whether Facebook Prophet is appropriate for your forecasting project.

String filters in pandas: you’re doing it wrong

String filters in pandas are something you should avoid, as the scalar_compare operator leads to performance bottlenecks.

Data & ML challenges for 2022

Key 2021 data & ML trends… and what they mean for 2022

How to quickly compare two datasets using a generic & powerful SQL query

A step-by-step guide to ease datasets comparison via a ready-to-use Structured Query Language template

Scoring Customer Propensity using Machine Learning Models on Google Analytics Data

A deep dive into how we built state-of-the-art custom machine learning models to estimate customer propensity to buy a product using Google Analytics...

The path to developing a high-performance demand forecasting model - Part 4

Until now we have mainly talked about forecasting regular products that have been on the shelf for quite some time. But what about products that...