@bluelibs/runner - v3.0.0

BlueLibs Runner: The Framework That Actually Makes Sense

Or: How I Learned to Stop Worrying and Love Dependency Injection


Welcome to BlueLibs Runner, where we've taken the chaos of modern application architecture and turned it into something that won't make you question your life choices at 3am. This isn't just another framework – it's your new best friend who actually understands that code should be readable, testable, and not require a PhD in abstract nonsense to maintain.

BlueLibs Runner is a TypeScript-first framework that embraces functional programming principles while keeping dependency injection simple enough that you won't need a flowchart to understand your own code. Think of it as the anti-framework framework – it gets out of your way and lets you build stuff that actually works.

  • Tasks are functions - Not classes with 47 methods you'll never use
  • Resources are singletons - Database connections, configs, services - the usual suspects
  • Events are just events - Revolutionary concept, we know
  • Everything is async - Because it's 2025 and blocking code is so 2005
  • Explicit beats implicit - No magic, no surprises, no "how the hell does this work?"

npm install @bluelibs/runner

Here's a complete Express server in fewer lines than most frameworks need for their "Hello World":

import express from "express";
import { resource, task, run } from "@bluelibs/runner";

// A resource is anything you want to share across your app
const server = resource({
  id: "app.server",
  init: async (config: { port: number }) => {
    const app = express();
    const server = app.listen(config.port);
    console.log(`Server running on port ${config.port}`);
    return { app, server };
  },
  dispose: async ({ server }) => server.close(),
});

// Tasks are your business logic - pure-ish, easily testable functions
const createUser = task({
  id: "app.tasks.createUser",
  dependencies: { server },
  run: async (userData: { name: string }, { server }) => {
    // Your actual business logic here
    return { id: "user-123", ...userData };
  },
});

// Wire everything together
const app = resource({
  id: "app",
  // Here you make the system aware of resources, tasks, middleware, and events.
  register: [server.with({ port: 3000 }), createUser],
  dependencies: { server, createUser },
  init: async (_, { server, createUser }) => {
    server.app.post("/users", async (req, res) => {
      const user = await createUser(req.body);
      res.json(user);
    });
  },
});

// That's it. No webpack configs, no decorators, no XML.
const { dispose } = await run(app);

Tasks are functions with superpowers. They're pure-ish, testable, and composable. Unlike classes that accumulate methods like a hoarder accumulates stuff, tasks do one thing well.

const sendEmail = task({
  id: "app.tasks.sendEmail",
  dependencies: { emailService, logger },
  run: async ({ to, subject, body }: EmailData, { emailService, logger }) => {
    await logger.info(`Sending email to ${to}`);
    return await emailService.send({ to, subject, body });
  },
});

// Test it like a normal function (because it basically is)
const result = await sendEmail.run(
  { to: "user@example.com", subject: "Hi", body: "Hello!" },
  { emailService: mockEmailService, logger: mockLogger }
);

Look, we get it. You could turn every function into a task, but that's like using a sledgehammer to crack nuts. Here's the deal:

Make it a task when:

  • It's a high-level business action: "app.user.register", "app.order.process"
  • You want it trackable and observable
  • Multiple parts of your app need it
  • It's complex enough to benefit from dependency injection

Don't make it a task when:

  • It's a simple utility function
  • It's used in only one place or to help other tasks
  • It's performance-critical and doesn't need DI overhead

Think of tasks as the "main characters" in your application story, not every single line of dialogue.
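For example, a small helper like this should stay a plain function (the name and body here are ours, purely illustrative) - tasks just import and call it, no DI required:

```typescript
// A plain utility: no DI, no id, no observability - and that's fine.
// Tasks that need it simply import and call it directly.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs into dashes
    .replace(/^-+|-+$/g, ""); // trim leading/trailing dashes
}
```

If the helper later grows dependencies or needs observability, promoting it to a task is a mechanical change.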

Resources are the services, configs, and connections that live throughout your app's lifecycle. They initialize once and stick around until cleanup time. Each resource must be registered (via register: []) exactly once before it can be used.

import { MongoClient } from "mongodb";

const database = resource({
  id: "app.db",
  init: async () => {
    const client = new MongoClient(process.env.DATABASE_URL as string);
    await client.connect();

    return client;
  },
  dispose: async (client) => await client.close(),
});

const userService = resource({
  id: "app.services.user",
  dependencies: { database },
  init: async (_, { database }) => ({
    async createUser(userData: UserData) {
      return database.db().collection("users").insertOne(userData);
    },
    async getUser(id: string) {
      return database.db().collection("users").findOne({ _id: id });
    },
  }),
});

Resources can be configured with type-safe options. No more "config object of unknown shape" nonsense.

type SMTPConfig = {
  smtpUrl: string;
  from: string;
};

const emailer = resource({
  id: "app.emailer",
  init: async (config: SMTPConfig) => ({
    send: async (to: string, subject: string, body: string) => {
      // Use config.smtpUrl and config.from
    },
  }),
});

// Register with specific config
const app = resource({
  id: "app",
  register: [
    emailer.with({
      smtpUrl: "smtp://localhost",
      from: "noreply@myapp.com",
    }),
    // using emailer without with() will throw a type-error ;)
  ],
});

For cases where you need to share variables between init() and dispose() methods (because sometimes cleanup is complicated), use the enhanced context pattern:

const dbResource = resource({
  id: "db.service",
  context: () => ({
    connections: new Map(),
    pools: [],
  }),
  async init(config, deps, ctx) {
    const db = await connectToDatabase();
    ctx.connections.set("main", db);
    ctx.pools.push(createPool(db));
    return db;
  },
  async dispose(db, config, deps, ctx) {
    // Same context available - no more "how do I access that thing I created?"
    for (const pool of ctx.pools) {
      await pool.drain();
    }
    for (const [name, conn] of ctx.connections) {
      await conn.close();
    }
  },
});

Events let different parts of your app talk to each other without tight coupling. It's like having a really good office messenger who never forgets anything.

const userRegistered = event<{ userId: string; email: string }>({
  id: "app.events.userRegistered",
});

const registerUser = task({
  id: "app.tasks.registerUser",
  dependencies: { userService, userRegistered },
  run: async (userData, { userService, userRegistered }) => {
    const user = await userService.createUser(userData);

    // Tell the world about it
    await userRegistered({ userId: user.id, email: user.email });
    return user;
  },
});

// Someone else handles the welcome email
const sendWelcomeEmail = task({
  id: "app.tasks.sendWelcomeEmail",
  on: userRegistered, // Listen to the event, notice the "on"
  run: async (eventData) => {
    // Everything is type-safe, automatically inferred from the 'on' property
    console.log(`Welcome email sent to ${eventData.data.email}`);
  },
});

Sometimes you need to be the nosy neighbor of your application:

const logAllEventsTask = task({
  id: "app.tasks.logAllEvents",
  on: "*", // Listen to EVERYTHING
  run(event) {
    console.log("Event detected", event.id, event.data);
    // Note: Be careful with dependencies here since some events fire before initialization
  },
});

Tasks and resources have their own lifecycle events that you can hook into:

const myTask = task({ ... });
const myResource = resource({ ... });

  • myTask.events.beforeRun - Fires before the task runs
  • myTask.events.afterRun - Fires after the task completes
  • myTask.events.onError - Fires when the task fails
  • myResource.events.beforeInit - Fires before the resource initializes
  • myResource.events.afterInit - Fires after the resource initializes
  • myResource.events.onError - Fires when the resource initialization fails

Each event has its own utilities and functions.

The framework comes with its own set of events that fire during the lifecycle. Think of them as the system's way of keeping you informed:

  • globals.tasks.beforeRun - "Hey, I'm about to run this task"
  • globals.tasks.afterRun - "Task completed, here's what happened"
  • globals.tasks.onError - "Oops, something went wrong"
  • globals.resources.beforeInit - "Initializing a resource"
  • globals.resources.afterInit - "Resource is ready"
  • globals.resources.onError - "Resource initialization failed"

const taskLogger = task({
  id: "app.logging.taskLogger",
  on: globalEvents.tasks.beforeRun,
  run(event) {
    console.log(`Running task: ${event.source} with input:`, event.data.input);
  },
});

Middleware wraps around your tasks and resources, adding cross-cutting concerns without polluting your business logic.

// This is a middleware that accepts a config
const authMiddleware = middleware({
  id: "app.middleware.auth",
  // You can also add dependencies, no problem.
  run: async (
    { task, next },
    dependencies,
    config: { requiredRole: string }
  ) => {
    const user = task.input.user;
    if (!user || user.role !== config.requiredRole) {
      throw new Error("Unauthorized");
    }
    return next(task.input);
  },
});

const adminTask = task({
  id: "app.tasks.adminOnly",
  // If the configuration accepts {} or is empty, .with() becomes optional, otherwise it becomes enforced.
  middleware: [authMiddleware.with({ requiredRole: "admin" })],
  run: async (input: { user: User }) => {
    return "Secret admin data";
  },
});

Want to add logging to everything? Authentication to all tasks? Global middleware has your back:

const logMiddleware = middleware({
  id: "app.middleware.log",
  run: async ({ task, next }) => {
    console.log(`Executing: ${task.definition.id}`);
    const result = await next(task.input);
    console.log(`Completed: ${task.definition.id}`);
    return result;
  },
});

const app = resource({
  id: "app",
  register: [
    logMiddleware.everywhere({ tasks: true, resources: false }), // Only tasks get logged
  ],
});

Ever tried to pass user data through 15 function calls? Yeah, we've been there. Context fixes that without turning your code into a game of telephone.

const UserContext = createContext<{ userId: string; role: string }>(
  "app.userContext"
);

const getUserData = task({
  id: "app.tasks.getUserData",
  // This middleware ensures the context is available before the task runs, throws if not.
  middleware: [UserContext.require()],
  run: async () => {
    const user = UserContext.use(); // Available anywhere in the async chain
    return `Current user: ${user.userId} (${user.role})`;
  },
});

// Provide context at the entry point
const handleRequest = resource({
  id: "app.requestHandler",
  init: async () => {
    return UserContext.provide({ userId: "123", role: "admin" }, async () => {
      // All tasks called within this scope have access to UserContext
      return await getUserData();
    });
  },
});

Context shines when combined with middleware for request-scoped data:

const RequestContext = createContext<{
  requestId: string;
  startTime: number;
  userAgent?: string;
}>("app.requestContext");

const requestMiddleware = middleware({
  id: "app.middleware.request",
  run: async ({ task, next }) => {
    // This works even in express middleware if needed.
    return RequestContext.provide(
      {
        requestId: crypto.randomUUID(),
        startTime: Date.now(),
        userAgent: "MyApp/1.0",
      },
      async () => {
        return next(task.input);
      }
    );
  },
});

const handleRequest = task({
  id: "app.handleRequest",
  middleware: [requestMiddleware],
  run: async (input: { path: string }) => {
    const request = RequestContext.use();
    console.log(`Processing ${input.path} (Request ID: ${request.requestId})`);
    return { success: true, requestId: request.requestId };
  },
});

When your app grows beyond "hello world", you'll want to group related dependencies. The index() helper is your friend - it's basically a 3-in-1 resource that registers, depends on, and returns everything you give it.

// This registers all services, depends on them, and returns them as one clean interface
const services = index({
userService,
emailService,
paymentService,
notificationService,
});

const app = resource({
id: "app",
register: [services],
dependencies: { services },
init: async (_, { services }) => {
// Access everything through one clean interface
const user = await services.userService.createUser(userData);
await services.emailService.sendWelcome(user.email);
},
});

Errors happen. When they do, you can listen for them and decide what to do. No more unhandled promise rejections ruining your day.

const riskyTask = task({
  id: "app.tasks.risky",
  run: async () => {
    throw new Error("Something went wrong");
  },
  // Behind the scenes, task() creates these 3 events for you: beforeRun, afterRun, onError
});

const errorHandler = task({
  id: "app.tasks.errorHandler",
  on: riskyTask.events.onError,
  run: async (event) => {
    console.error("Task failed:", event.data.error);

    // Don't let the error bubble up - this makes the task return undefined
    event.data.suppress();
  },
});

Because nobody likes waiting for the same expensive operation twice:

import { globals } from "@bluelibs/runner";

const expensiveTask = task({
  id: "app.tasks.expensive",
  middleware: [
    globals.middleware.cache.with({
      // lru-cache options by default
      ttl: 60 * 1000, // Cache for 1 minute
      keyBuilder: (taskId, input) => `${taskId}-${input.userId}`, // optional key builder
    }),
  ],
  run: async ({ userId }) => {
    // This expensive operation will be cached
    return await doExpensiveCalculation(userId);
  },
});

// Global cache configuration
const app = resource({
  id: "app.cache",
  register: [
    // You have to register it; the cache resource is not enabled by default.
    globals.resources.cache.with({
      defaultOptions: {
        max: 1000, // Maximum items in cache
        ttl: 30 * 1000, // Default TTL
      },
      // in-memory is sync by default; when using Redis or others,
      // mark this as true to await the response.
      async: false,
    }),
  ],
});

Want Redis instead of the default LRU cache? No problem, just override the cache factory task:

import { task } from "@bluelibs/runner";

const redisCacheFactory = task({
  id: "globals.tasks.cacheFactory", // Same ID as the default task
  run: async (options: any) => {
    // Make sure to turn async on in the cacher.
    return new RedisCache(options);
  },
});

const app = resource({
  id: "app",
  register: [
    // Your other stuff
  ],
  overrides: [redisCacheFactory], // Override the default cache factory
});

The structured logging system that actually makes debugging enjoyable

BlueLibs Runner comes with a built-in logging system that's event-driven, structured, and doesn't make you hate your life when you're trying to debug at 2 AM. It emits events for everything, so you can handle logs however you want - ship them to your favorite log warehouse, pretty-print them to console, or ignore them entirely (we won't judge).

import { globals } from "@bluelibs/runner";

const businessTask = task({
  id: "app.tasks.business",
  dependencies: { logger: globals.resources.logger },
  run: async (_, { logger }) => {
    logger.info("Starting business process");
    logger.warn("This might take a while");
    logger.error("Oops, something went wrong", {
      error: new Error("Database connection failed"),
    });
    logger.critical("System is on fire", {
      data: { temperature: "9000°C" },
    });
  },
});

The logger supports six log levels with increasing severity:

Level     Severity  When to Use                                  Color
trace     0         Ultra-detailed debugging info                Gray
debug     1         Development and debugging information        Cyan
info      2         General information about normal operations  Green
warn      3         Something's not right, but still working     Yellow
error     4         Errors that need attention                   Red
critical  5         System-threatening issues                    Magenta

// All log levels are available as methods
logger.trace("Ultra-detailed debugging info");
logger.debug("Development debugging");
logger.info("Normal operation");
logger.warn("Something's fishy");
logger.error("Houston, we have a problem");
logger.critical("DEFCON 1: Everything is broken");

The logger accepts rich, structured data that makes debugging actually useful:

const userTask = task({
  id: "app.tasks.user.create",
  dependencies: { logger: globals.resources.logger },
  run: async (userData, { logger }) => {
    // Basic message
    logger.info("Creating new user");

    // With structured data
    logger.info("User creation attempt", {
      data: {
        email: userData.email,
        registrationSource: "web",
        timestamp: new Date().toISOString(),
      },
    });

    // With error information
    try {
      const user = await createUser(userData);
      logger.info("User created successfully", {
        data: { userId: user.id, email: user.email },
      });
    } catch (error) {
      logger.error("User creation failed", {
        error,
        data: {
          attemptedEmail: userData.email,
          validationErrors: error.validationErrors,
        },
      });
    }
  },
});

Create logger instances with bound context for consistent metadata across related operations:

const RequestContext = createContext<{ requestId: string; userId: string }>(
  "app.requestContext"
);

const requestHandler = task({
  id: "app.tasks.handleRequest",
  dependencies: { logger: globals.resources.logger },
  run: async (requestData, { logger }) => {
    const request = RequestContext.use();

    // Create a contextual logger with bound metadata
    const requestLogger = logger.with({
      requestId: request.requestId,
      userId: request.userId,
      source: "api.handler",
    });

    // All logs from this logger will include the bound context
    requestLogger.info("Processing request", {
      data: { endpoint: requestData.path },
    });

    requestLogger.debug("Validating input", {
      data: { inputSize: JSON.stringify(requestData).length },
    });

    // Context is automatically included in all log events
    requestLogger.error("Request processing failed", {
      error: new Error("Invalid input"),
      data: { stage: "validation" },
    });
  },
});

By default, logs are just events - they don't print to console unless you tell them to. Set a print threshold to automatically output logs at or above a certain level:

// Set up log printing (they don't print by default)
const setupLogging = task({
  id: "app.logging.setup",
  on: globals.resources.logger.events.afterInit,
  run: async (event) => {
    const logger = event.data.value;

    // Print info level and above (info, warn, error, critical)
    logger.setPrintThreshold("info");

    // Print only errors and critical issues
    logger.setPrintThreshold("error");

    // Disable auto-printing entirely
    logger.setPrintThreshold(null);
  },
});

Every log generates an event that you can listen to. This is where the real power comes in:

// Ship logs to your favorite log warehouse
const logShipper = task({
  id: "app.logging.shipper", // or pretty printer, or winston, pino bridge, etc.
  on: globals.events.log,
  run: async (event) => {
    const log = event.data;

    // Ship critical errors to PagerDuty
    if (log.level === "critical") {
      await pagerDuty.alert({
        message: log.message,
        details: log.data,
        source: log.source,
      });
    }

    // Ship all errors to error tracking
    if (log.level === "error" || log.level === "critical") {
      await sentry.captureException(log.error || new Error(log.message), {
        tags: { source: log.source },
        extra: log.data,
        level: log.level,
      });
    }

    // Ship everything to your log warehouse
    await logWarehouse.ship({
      timestamp: log.timestamp,
      level: log.level,
      message: log.message,
      source: log.source,
      data: log.data,
      context: log.context,
    });
  },
});

// Filter logs by source
const databaseLogHandler = task({
  id: "app.logging.database",
  on: globals.events.log,
  run: async (event) => {
    const log = event.data;

    // Only handle database-related logs
    if (log.source?.includes("database")) {
      await databaseMonitoring.recordMetric({
        operation: log.data?.operation,
        duration: log.data?.duration,
        level: log.level,
      });
    }
  },
});

Want to use Winston as your transport? No problem - integrate it seamlessly:

import winston from "winston";

// Create the Winston logger; put it in a resource if it's used from various places.
const winstonLogger = winston.createLogger({
  level: "info",
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  transports: [
    new winston.transports.File({ filename: "error.log", level: "error" }),
    new winston.transports.File({ filename: "combined.log" }),
    new winston.transports.Console({
      format: winston.format.simple(),
    }),
  ],
});

// Bridge BlueLibs logs to Winston
const winstonBridge = task({
  id: "app.logging.winston",
  on: globals.events.log,
  run: async (event) => {
    const log = event.data;

    // Convert BlueLibs log to Winston format
    const winstonMeta = {
      source: log.source,
      timestamp: log.timestamp,
      data: log.data,
      context: log.context,
      ...(log.error && { error: log.error }),
    };

    // Map log levels (BlueLibs -> Winston)
    const levelMapping = {
      trace: "silly",
      debug: "debug",
      info: "info",
      warn: "warn",
      error: "error",
      critical: "error", // Winston doesn't have critical, use error
    };

    const winstonLevel = levelMapping[log.level] || "info";
    winstonLogger.log(winstonLevel, log.message, winstonMeta);
  },
});

Want to customize how logs are printed? You can override the print behavior:

// Custom logger with JSON output
class JSONLogger extends Logger {
  print(log: ILog) {
    console.log(
      JSON.stringify(
        {
          timestamp: log.timestamp.toISOString(),
          level: log.level.toUpperCase(),
          source: log.source,
          message: log.message,
          data: log.data,
          context: log.context,
          error: log.error,
        },
        null,
        2
      )
    );
  }
}

// Custom logger resource
const customLogger = resource({
  id: "app.logger.custom",
  dependencies: { eventManager: globals.resources.eventManager },
  init: async (_, { eventManager }) => {
    return new JSONLogger(eventManager);
  },
});

// Or give it the id "globals.resources.logger" and register it via overrides to replace the default logger

Every log event contains:

interface ILog {
  level: string; // The log level (trace, debug, info, etc.)
  source?: string; // Where the log came from
  message: any; // The main log message (can be object or string)
  timestamp: Date; // When the log was created
  error?: {
    // Structured error information
    name: string;
    message: string;
    stack?: string;
  };
  data?: Record<string, any>; // Additional structured data about the log itself
  context?: Record<string, any>; // Bound context from logger.with() - the context in which the log was created
}

// Bad - hard to search and filter
await logger.error(`Failed to process user ${userId} order ${orderId}`);

// Good - searchable and filterable
await logger.error("Order processing failed", {
  data: {
    userId,
    orderId,
    step: "payment",
    paymentMethod: "credit_card",
  },
});

// Include relevant context with errors
try {
  await processPayment(order);
} catch (error) {
  await logger.error("Payment processing failed", {
    error,
    data: {
      orderId: order.id,
      amount: order.total,
      currency: order.currency,
      paymentMethod: order.paymentMethod,
      attemptNumber: order.paymentAttempts,
    },
  });
}

// Good level usage
await logger.debug("Cache hit", { data: { key, ttl: remainingTTL } });
await logger.info("User logged in", { data: { userId, loginMethod } });
await logger.warn("Rate limit approaching", {
  data: { current: 95, limit: 100 },
});
await logger.error("Database connection failed", {
  error,
  data: { attempt: 3 },
});
await logger.critical("System out of memory", { data: { available: "0MB" } });

// Create loggers with domain context
const paymentLogger = logger.with({ source: "payment.processor" });
const authLogger = logger.with({ source: "auth.service" });
const emailLogger = logger.with({ source: "email.service" });

// Use throughout your domain
await paymentLogger.info("Processing payment", { data: paymentData });
await authLogger.warn("Failed login attempt", { data: { email, ip } });

  1. Logging sensitive data: Never log passwords, tokens, or PII
  2. Over-logging in hot paths: Check print thresholds for expensive operations
  3. Forgetting error objects: Always include the original error when logging failures
  4. Poor log levels: Don't use error for expected conditions
  5. Missing context: Include relevant identifiers (user ID, request ID, etc.)
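
For mistake #1, a tiny sanitizer goes a long way. Here's a hedged sketch (redact and SENSITIVE_KEYS are our own illustrative names, not framework APIs) of scrubbing a payload before it reaches a log's data field:

```typescript
// Hypothetical helper (not part of @bluelibs/runner): replaces sensitive
// fields with a placeholder before the payload is attached to a log.
const SENSITIVE_KEYS = new Set(["password", "token", "secret", "apiKey"]);

function redact(payload: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    safe[key] = SENSITIVE_KEYS.has(key) ? "[REDACTED]" : value;
  }
  return safe;
}

// Usage: logger.info("Login attempt", { data: redact(credentials) });
```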

The Logger system is designed to be fast, flexible, and non-intrusive. Use it liberally - good logging is the difference between debugging hell and debugging heaven.

Sometimes you want to attach metadata to your tasks and resources for documentation, filtering, or middleware logic:

const apiTask = task({
  id: "app.tasks.api.createUser",
  meta: {
    title: "Create User API",
    description: "Creates a new user account",
    tags: ["api", "user", "public"],
  },
  run: async (userData) => {
    // Business logic
  },
});

// Middleware that only applies to API tasks
const apiMiddleware = middleware({
  id: "app.middleware.api",
  run: async ({ task, next }) => {
    if (task.meta?.tags?.includes("api")) {
      // Apply API-specific logic
    }
    return next(task.input);
  },
});

Sometimes you need to replace a component entirely. Maybe you're testing, maybe you're A/B testing, maybe you just changed your mind:

const productionEmailer = resource({
  id: "app.emailer",
  init: async () => new SMTPEmailer(),
});

const testEmailer = resource({
  ...productionEmailer, // Copy everything else
  init: async () => new MockEmailer(), // But use a different implementation
});

const app = resource({
  id: "app",
  register: [productionEmailer],
  overrides: [testEmailer], // This replaces the production version
});

As your app grows, you'll want consistent naming. Here's the convention that won't drive you crazy:

Type            Format
Tasks           {domain}.tasks.{taskName}
Listener Tasks  {domain}.tasks.{taskName}.on{EventName}
Resources       {domain}.resources.{resourceName}
Events          {domain}.events.{eventName}
Middleware      {domain}.middleware.{middlewareName}

// Helper function for consistency
function namespaced(id: string) {
  return `mycompany.myapp.${id}`;
}

const userTask = task({
  id: namespaced("tasks.user.create"),
  // ...
});

To keep things dead simple, we avoided polluting the D.I. container with a factory concept. Instead, we recommend using a resource with a factory function to create instances of your classes:

const myFactory = resource({
  id: "app.factories.myFactory",
  init: async (config: { someOption: string }) => {
    return (input: any) => {
      return new MyClass(input, config.someOption);
    };
  },
});

const app = resource({
  id: "app",
  // Configure the factory resource at registration time via .with()
  register: [myFactory.with({ someOption: "value" })],
  dependencies: { myFactory },
  init: async (_, { myFactory }) => {
    const instance = myFactory("constructor input");
  },
});

We expose the internal services for advanced use cases (but try not to use them unless you really need to):

import { globals } from "@bluelibs/runner";

const advancedTask = task({
id: "app.advanced",
dependencies: {
store: globals.resources.store,
taskRunner: globals.resources.taskRunner,
eventManager: globals.resources.eventManager,
},
run: async (_, { store, taskRunner, eventManager }) => {
// Direct access to the framework internals
// (Use with caution!)
},
});

Here's a more realistic application structure that shows everything working together:

import express from "express";
import { MongoClient } from "mongodb";
import {
  resource,
  task,
  event,
  middleware,
  index,
  run,
  createContext,
} from "@bluelibs/runner";

// Configuration
const config = resource({
  id: "app.config",
  init: async () => ({
    port: parseInt(process.env.PORT || "3000"),
    databaseUrl: process.env.DATABASE_URL!,
    jwtSecret: process.env.JWT_SECRET!,
  }),
});

// Database
const database = resource({
  id: "app.database",
  dependencies: { config },
  init: async (_, { config }) => {
    const client = new MongoClient(config.databaseUrl);
    await client.connect();
    return client;
  },
  dispose: async (client) => await client.close(),
});

// Context for request data
const RequestContext = createContext<{ userId?: string; role?: string }>(
  "app.requestContext"
);

// Events
const userRegistered = event<{ userId: string; email: string }>({
  id: "app.events.userRegistered",
});

// Middleware
const authMiddleware = middleware<{ requiredRole?: string }>({
  id: "app.middleware.auth",
  run: async ({ task, next }, deps, config) => {
    const context = RequestContext.use();
    if (config?.requiredRole && context.role !== config.requiredRole) {
      throw new Error("Insufficient permissions");
    }
    return next(task.input);
  },
});

// Services
const userService = resource({
  id: "app.services.user",
  dependencies: { database },
  init: async (_, { database }) => ({
    async createUser(userData: { name: string; email: string }) {
      const users = database.db().collection("users");
      const result = await users.insertOne(userData);
      return { id: result.insertedId.toString(), ...userData };
    },
  }),
});

// Business Logic
const registerUser = task({
  id: "app.tasks.registerUser",
  dependencies: { userService, userRegistered },
  run: async (userData, { userService, userRegistered }) => {
    const user = await userService.createUser(userData);
    await userRegistered({ userId: user.id, email: user.email });
    return user;
  },
});

const adminOnlyTask = task({
  id: "app.tasks.adminOnly",
  middleware: [authMiddleware.with({ requiredRole: "admin" })],
  run: async () => {
    return "Top secret admin data";
  },
});

// Event Handlers
const sendWelcomeEmail = task({
  id: "app.tasks.sendWelcomeEmail",
  on: userRegistered,
  run: async (event) => {
    console.log(`Sending welcome email to ${event.data.email}`);
    // Email sending logic here
  },
});

// Group everything together
const services = index({
  userService,
  registerUser,
  adminOnlyTask,
});

// Express server
const server = resource({
  id: "app.server",
  register: [config, database, services, sendWelcomeEmail],
  dependencies: { config, services },
  init: async (_, { config, services }) => {
    const app = express();
    app.use(express.json());

    // Middleware to set up request context
    app.use((req, res, next) => {
      RequestContext.provide(
        {
          userId: req.headers["user-id"] as string,
          role: req.headers["user-role"] as string,
        },
        () => next()
      );
    });

    app.post("/register", async (req, res) => {
      try {
        const user = await services.registerUser(req.body);
        res.json({ success: true, user });
      } catch (error) {
        res.status(400).json({ error: error.message });
      }
    });

    app.get("/admin", async (req, res) => {
      try {
        const data = await services.adminOnlyTask();
        res.json({ data });
      } catch (error) {
        res.status(403).json({ error: error.message });
      }
    });

    const server = app.listen(config.port);
    console.log(`Server running on port ${config.port}`);
    return server;
  },
  dispose: async (server) => server.close(),
});

// Start the application
const { dispose } = await run(server);

// Graceful shutdown
process.on("SIGTERM", async () => {
  console.log("Shutting down gracefully...");
  await dispose();
  process.exit(0);
});

Unit testing is straightforward because everything is explicit:

describe("registerUser task", () => {
  it("should create a user and emit event", async () => {
    const mockUserService = {
      createUser: jest.fn().mockResolvedValue({ id: "123", name: "John" }),
    };
    const mockEvent = jest.fn();

    const result = await registerUser.run(
      { name: "John", email: "john@example.com" },
      { userService: mockUserService, userRegistered: mockEvent }
    );

    expect(result.id).toBe("123");
    expect(mockEvent).toHaveBeenCalledWith({
      userId: "123",
      email: "john@example.com",
    });
  });
});

Integration testing with overrides lets you test the whole system with controlled components:

const testDatabase = resource({
  id: "app.database",
  init: async () => new MemoryDatabase(), // In-memory test database
});

const testApp = resource({
  id: "test.app",
  register: [productionApp],
  overrides: [testDatabase], // Replace real database with test one
});

describe("Full application", () => {
  it("should handle user registration flow", async () => {
    const { dispose } = await run(testApp);

    // Test your application end-to-end

    await dispose(); // Clean up
  });
});

Ever had too many database connections competing for resources? Your connection pool under pressure? The Semaphore is here to manage concurrent operations like a professional traffic controller.

Think of it as a VIP rope at an exclusive venue. Only a limited number of operations can proceed at once. The rest wait in an orderly queue like well-behaved async functions.

import { Semaphore } from "@bluelibs/runner";

// Create a semaphore that allows max 5 concurrent operations
const dbSemaphore = new Semaphore(5);

// Basic usage - acquire and release manually
await dbSemaphore.acquire();
try {
  // Do your database magic here
  const result = await db.query("SELECT * FROM users");
  console.log(result);
} finally {
  dbSemaphore.release(); // Critical: always release to prevent bottlenecks
}

Why manage permits manually when you can let the semaphore do the heavy lifting?

// The elegant approach - automatic cleanup guaranteed!
const users = await dbSemaphore.withPermit(async () => {
  return await db.query("SELECT * FROM users WHERE active = true");
});

Prevent operations from hanging indefinitely with configurable timeouts:

try {
  // Wait max 5 seconds, then throw a timeout error
  await dbSemaphore.acquire({ timeout: 5000 });
  // Your code here
} catch (error) {
  console.log("Operation timed out waiting for a permit");
}

// Or with withPermit
const result = await dbSemaphore.withPermit(
  async () => await slowDatabaseOperation(),
  { timeout: 10000 } // 10 second timeout
);

Operations can be cancelled using AbortSignal:

const controller = new AbortController();

// Start an operation
const operationPromise = dbSemaphore.withPermit(
  async () => await veryLongOperation(),
  { signal: controller.signal }
);

// Cancel the operation after 3 seconds
setTimeout(() => {
  controller.abort();
}, 3000);

try {
  await operationPromise;
} catch (error) {
  console.log("Operation was cancelled");
}

Want to know what's happening under the hood?

// Get comprehensive metrics
const metrics = dbSemaphore.getMetrics();
console.log(`
Semaphore Status Report:
Available permits: ${metrics.availablePermits}/${metrics.maxPermits}
Operations waiting: ${metrics.waitingCount}
Utilization: ${(metrics.utilization * 100).toFixed(1)}%
Disposed: ${metrics.disposed ? "Yes" : "No"}
`);

// Quick checks
console.log(`Available permits: ${dbSemaphore.getAvailablePermits()}`);
console.log(`Queue length: ${dbSemaphore.getWaitingCount()}`);
console.log(`Is disposed: ${dbSemaphore.isDisposed()}`);

Properly dispose of semaphores when finished:

// Reject all waiting operations and prevent new ones
dbSemaphore.dispose();

// All waiting operations will be rejected with:
// Error: "Semaphore has been disposed"

The orderly guardian of chaos, the diplomatic bouncer of async operations.

The Queue class is your friendly neighborhood task coordinator. Think of it as a very polite but firm British queue-master who ensures everyone waits their turn, prevents cutting in line, and gracefully handles when it's time to close shop.

Tasks execute one after another in first-in, first-out order. No cutting, no exceptions, no drama.

Using the clever AsyncLocalStorage, our Queue can detect when a task tries to queue another task (the async equivalent of "yo dawg, I heard you like queues..."). When caught red-handed, it politely but firmly rejects with a deadlock error.
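To build intuition for that detection trick, here is a toy sketch (not the library's actual code) of how AsyncLocalStorage can flag re-entrant queueing:

```typescript
// Toy FIFO queue with AsyncLocalStorage-based deadlock detection.
// Illustration only - the real Queue's internals may differ.
import { AsyncLocalStorage } from "node:async_hooks";

class MiniQueue {
  private tail: Promise<unknown> = Promise.resolve();
  private readonly context = new AsyncLocalStorage<boolean>();

  run<T>(task: () => Promise<T>): Promise<T> {
    // If we're already inside a task from this queue, queueing again
    // would wait on the chain we're part of - reject instead of hanging.
    if (this.context.getStore()) {
      return Promise.reject(new Error("Dead-lock detected"));
    }
    const result = this.tail.then(() => this.context.run(true, task));
    this.tail = result.catch(() => void 0); // keep the chain alive on errors
    return result;
  }
}

const queue = new MiniQueue();
queue
  .run(async () => queue.run(async () => "inner")) // nested run
  .catch((e) => console.log(e.message)); // logs "Dead-lock detected"
```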

The Queue provides cooperative cancellation through the Web Standard AbortController:

  • Patient mode (default): Waits for all queued tasks to complete naturally
  • Cancel mode: Signals running tasks to abort via AbortSignal, enabling early termination

Basic usage:

import { Queue } from "@bluelibs/runner";

const queue = new Queue();

// Queue up some work
const result = await queue.run(async (signal) => {
  // Your async task here
  return "Task completed";
});

// Graceful shutdown
await queue.dispose();

The Queue provides each task with an AbortSignal for cooperative cancellation. Tasks should periodically check this signal to enable early termination.

const queue = new Queue();

// Task that respects cancellation
const processLargeDataset = queue.run(async (signal) => {
  const items = await fetchLargeDataset();

  for (const item of items) {
    // Check for cancellation before processing each item
    if (signal.aborted) {
      throw new Error("Operation was cancelled");
    }

    await processItem(item);
  }

  return "Dataset processed successfully";
});

// Cancel all running tasks
await queue.dispose({ cancel: true });

Pass the signal straight through to Web APIs like fetch, and in-flight requests are cancelled automatically:

const queue = new Queue();

const fetchWithCancellation = queue.run(async (signal) => {
  try {
    // Pass the signal to fetch for automatic cancellation
    const response = await fetch("https://api.example.com/data", { signal });
    return await response.json();
  } catch (error) {
    if (error.name === "AbortError") {
      console.log("Request was cancelled");
    }
    throw error;
  }
});

// This will cancel the fetch request if still pending
await queue.dispose({ cancel: true });

For longer loops, check the signal between iterations with signal.throwIfAborted():

const queue = new Queue();

const processFiles = queue.run(async (signal) => {
  const files = await getFileList();
  const results = [];

  for (let i = 0; i < files.length; i++) {
    // Respect cancellation
    signal.throwIfAborted();

    const result = await processFile(files[i]);
    results.push(result);

    // Optional: Report progress
    console.log(`Processed ${i + 1}/${files.length} files`);
  }

  return results;
});

Internally, the Queue keeps a small amount of state:

  • tail: The promise chain that maintains FIFO execution order
  • disposed: Boolean flag indicating whether the queue accepts new tasks
  • abortController: Centralized cancellation controller that provides AbortSignal to all tasks
  • executionContext: AsyncLocalStorage-based deadlock detection mechanism

Two errors you may encounter:

  • "Queue has been disposed": You tried to add work after closing time
  • "Dead-lock detected": A task tried to queue another task (infinite recursion prevention)

A safe usage pattern that always cleans up:

const queue = new Queue();
try {
  await queue.run(task);
} finally {
  await queue.dispose();
}

Tasks should regularly check the AbortSignal and respond appropriately:

// Preferred: Use signal.throwIfAborted() for immediate termination
signal.throwIfAborted();

// Alternative: Check signal.aborted for custom handling
if (signal.aborted) {
  cleanup();
  throw new Error("Operation cancelled");
}

Many Web APIs accept AbortSignal:

  • fetch(url, { signal })
  • Node's timers/promises setTimeout(delay, value, { signal })
  • Custom async operations
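These signals also compose. Assuming a runtime with the standard AbortSignal.any and AbortSignal.timeout helpers (Node 20+, modern browsers), you can abort on whichever comes first - queue cancellation or a deadline. The URL below is a placeholder:

```typescript
// Sketch: combine the task's queue-provided signal with a hard deadline.
// Assumes AbortSignal.any / AbortSignal.timeout are available (Node 20+).
async function fetchWithDeadline(taskSignal: AbortSignal): Promise<unknown> {
  // Aborts when either the queue cancels or 5 seconds elapse
  const combined = AbortSignal.any([taskSignal, AbortSignal.timeout(5000)]);
  const response = await fetch("https://api.example.com/data", {
    signal: combined,
  });
  return response.json();
}
```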

The Queue prevents deadlocks by rejecting attempts to queue tasks from within running tasks. Structure your code to avoid this pattern.
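The usual restructuring is to sequence tasks at the call site instead of queueing from within a task. The stand-in FIFO chain below is hypothetical (not the library's Queue), but the calling pattern is the same:

```typescript
// Stand-in FIFO chain (hypothetical) - illustrates sequencing at the
// call site instead of queueing follow-up work from inside a task.
type Task<T> = () => Promise<T>;

let tail: Promise<unknown> = Promise.resolve();

function enqueue<T>(task: Task<T>): Promise<T> {
  const result = tail.then(task);
  tail = result.catch(() => void 0); // keep the chain alive after errors
  return result;
}

async function runJobs(): Promise<number[]> {
  // Rather than enqueueing the second step inside the first (deadlock),
  // await the first task and enqueue the follow-up from out here.
  const data = await enqueue(async () => [1, 2, 3]);
  return enqueue(async () => data.map((n) => n * 2));
}
```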

When cancellation is expected, treat AbortError as a normal outcome:

try {
  await queue.run(task);
} catch (error) {
  if (error.name === "AbortError") {
    // Expected cancellation, handle appropriately
    return;
  }
  throw error; // Re-throw unexpected errors
}

Cooperative task scheduling with professional-grade cancellation support

Guarding a database connection pool:

class DatabaseManager {
  private semaphore = new Semaphore(10); // Max 10 concurrent queries

  async query(sql: string, params?: any[]) {
    return this.semaphore.withPermit(
      async () => {
        const connection = await this.pool.getConnection();
        try {
          return await connection.query(sql, params);
        } finally {
          connection.release();
        }
      },
      { timeout: 30000 } // 30 second timeout
    );
  }

  async shutdown() {
    this.semaphore.dispose();
    await this.pool.close();
  }
}

Rate-limiting an API client:

class APIClient {
  private rateLimiter = new Semaphore(5); // Max 5 concurrent requests

  async fetchUser(id: string, signal?: AbortSignal) {
    return this.rateLimiter.withPermit(
      async () => {
        const response = await fetch(`/api/users/${id}`, { signal });
        return response.json();
      },
      { signal, timeout: 10000 }
    );
  }
}

Batch processing with bounded concurrency. Note that the work must be launched up front and awaited together; awaiting withPermit inside a loop would serialize everything and defeat the semaphore:

async function processBatch(items: any[]) {
  const semaphore = new Semaphore(3); // Max 3 concurrent items

  console.log("Starting batch processing...");

  const results = await Promise.all(
    items.map((item, index) =>
      semaphore.withPermit(async () => {
        console.log(`Processing item ${index + 1}/${items.length}`);
        const result = await processItem(item);

        // Show progress
        const metrics = semaphore.getMetrics();
        console.log(
          `Active: ${metrics.maxPermits - metrics.availablePermits}, Waiting: ${metrics.waitingCount}`
        );

        return result;
      })
    )
  );

  semaphore.dispose();
  console.log("Batch processing complete!");
  return results;
}

Best practices:

  1. Always dispose: Clean up your semaphores when finished to prevent memory leaks
  2. Use withPermit(): It's cleaner and prevents resource leaks
  3. Set timeouts: Don't let operations hang forever
  4. Monitor metrics: Keep an eye on utilization to tune your permit count
  5. Handle errors: Timeouts and cancellations throw errors - catch them!

Common pitfalls:

  • Forgetting to release: Manual acquire/release is error-prone - prefer withPermit()
  • No timeout: Operations can hang forever without timeouts
  • Ignoring disposal: Always dispose semaphores to prevent memory leaks
  • Wrong permit count: Too few = slow, too many = defeats the purpose

What you get:

  • Type Safety: Full TypeScript support with intelligent inference
  • Testability: Everything is mockable and testable by design
  • Flexibility: Compose your app however you want
  • Performance: Built-in caching and optimization
  • Clarity: Explicit dependencies, no hidden magic
  • Developer Experience: Helpful error messages and clear patterns

What you avoid:

  • Complex configuration files that require a PhD to understand
  • Decorator hell that makes your code look like a Christmas tree
  • Hidden dependencies that break when you least expect it
  • Framework lock-in that makes you feel trapped
  • Mysterious behavior at runtime that makes you question reality

Coming from Express? No problem. Coming from NestJS? We feel your pain. Coming from Spring Boot? Welcome to the light side.

The beauty of BlueLibs Runner is that you can adopt it incrementally. Start with one task, one resource, and gradually refactor your existing code. No big bang rewrites required - your sanity will thank you.

This is part of the BlueLibs ecosystem. We're not trying to reinvent everything – just the parts that were broken.

BlueLibs Runner is what happens when you take all the good ideas from modern frameworks and leave out the parts that make you want to switch careers. It's TypeScript-first, test-friendly, and actually makes sense when you read it six months later.

Give it a try. Your future self (and your team) will thank you.

P.S. - Yes, we know there are 47 other JavaScript frameworks. This one's different. (No, really, it is.)

This project is licensed under the MIT License - see the LICENSE.md file for details.