Articles

Destroyer of Jobs, Creator of Better Ones

AI job displacement dominates the headlines, but what does the data actually say? This second part of the Europe trilogy examines the real risks and argues that Europe has unique tools to turn disruption into opportunity.

3/14/2026
12 min read

The Internet is (Deeply) Fake

As AI-generated misinformation floods the internet, Europe faces a critical moment. This article explores how the EU can protect democracy through digital literacy, stronger regulation, and using harmful technology as a force for good.

1/19/2026
10 min read

A Culinary Introduction to AI's Rumored Next Frontier

World models are everywhere in AI headlines, touted as our path to human-like intelligence. But what are they? This article traces their journey from psychology labs—where rats learned mazes without rewards—to self-driving cars and robotics. You'll learn how these systems predict and plan, why they might learn more like humans than other AI approaches, and how they are used in the world around us today.

12/24/2025
15 min read

We should start working on Artificial Intelligence again

It seems impossible to go on LinkedIn without encountering an AI expert within your first minute of scrolling. We're riding an AI hype-cycle at full force, yet most people can't tell you what artificial intelligence actually is. Here's the truth: most systems branded as 'AI' today are sophisticated pattern-matchers with no real intelligence. Market pressures have diverted research from AI's founding goal—recreating human intelligence. This article explores what real AI is, why we abandoned it, and which research approaches might actually get us there.

11/25/2025
20 min read

Developing Large Language Models for Quantum Chemistry Simulation Input Generation

Scientists often struggle with Domain-Specific Languages (DSLs) in fields like computational chemistry. In this paper, we propose a general framework for using Large Language Models (LLMs) to generate DSL code, using the ORCA quantum chemistry package as a case study. The framework combines various prompt engineering techniques, finetuning on different synthetic datasets, and retrieval-augmented generation. It significantly boosted LLM performance: with the less powerful GPT-3.5 as the base model, it even surpassed GPT-4o.

9/1/2024
20 min read

Active learning for reducing labeling effort in text classification tasks

Labelling data manually is time-consuming and costly. In this study, we explore how to reduce this labelling effort through active learning. We compare different uncertainty-based algorithms for selecting a subset of the data on which to train BERTbase for text classification, aiming to match the performance achieved with the full dataset. Moreover, we propose and examine heuristics to address the scalability and outlier-selection issues associated with pool-based active learning. While these heuristics did not improve AL, our findings show that uncertainty-based AL outperforms random sampling, though the performance gap narrows as query-pool sizes increase.

10/3/2021
30 min read