Packmind’s Journey in Context Engineering

A proud moment for the Packmind team 👏

In under a year, we grew from ~15% of AI-generated code in production to over 60%, all while preserving quality, stability, and maintainability.

How did we get here?

The shift was never about “more AI”; it was about productivity and quality.

Here’s how we did it:

  • Deep investment in context & guidelines for our AI agents (Copilot, Claude Code, Cursor, and Kiro)
  • Architectural discipline to limit dependencies and keep systems modular (Clean Architecture, DDD)
  • Closed feedback loops — AI output feeds back to context, with automated correction when drift is detected
  • Human oversight at critical steps and before commit


Because ultimately, context is everything: Context Gap = Impact Gap

🧠 What Is Context Engineering, & Why It’s Essential for Scaling AI Coding

Many teams plateau at prompt tuning. That’s necessary — but not sufficient — if you intend to build reliable, scalable AI-driven development systems. We think of context engineering as 3 layers:

  1. Markdown Prompt Engineering

We treat prompts like structured code: headers, lists, scoped instructions, defined segments. This turns free-form text into predictable instruction.
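As a rough sketch of what that structure looks like in practice (the role, scope, and rules below are illustrative, not Packmind’s actual instructions):

```markdown
## Role
You are a senior TypeScript reviewer working in a Clean Architecture monorepo.

## Scope
Apply these rules only to files under `packages/*/src`.

## Rules
1. Reject any new `any` type unless justified by an inline comment.
2. Keep use cases free of framework imports.

## Output
List findings as bullets, one file per bullet, most severe first.
```

Each section scopes the instruction it contains, so the model treats the prompt as a specification rather than a suggestion.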

  2. Agent Primitives

We break down our AI logic into building blocks: agents.md, instructions.md, chatmode.md, spec.md, memory.md. These files encode standards, business constraints, memory, and orchestration patterns. Over time, your team stops writing “ad hoc prompts” and starts designing a system of primitives that adapts and refines itself.
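A sketch of how such primitives can sit side by side in a repository (this layout is illustrative; the actual file set and comments will vary by team):

```markdown
repo-root/
  AGENTS.md         # entry point: standards index and orchestration hints
  instructions.md   # coding rules, scoped by file glob
  chatmode.md       # persona and tone for interactive sessions
  spec.md           # business constraints for the feature in flight
  memory.md         # decisions and context carried across sessions
```

The point is less the exact file names than the separation of concerns: standards, constraints, memory, and orchestration each live in a dedicated, versionable artifact.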

For instance, here is a set of instructions from our AGENTS.md covering our TypeScript unit tests:

```markdown
## Apply when writing or reviewing test files (.spec.ts, .test.ts) across all packages

* Remove 'Arrange, Act, Assert' comments; structure tests so setup, execution, and verification are clear without labels
* Avoid 'when' in it() descriptions; move context to describe('when…'); keep it() focused on expected behavior
* Name tests with assertive, verb-first phrases; avoid starting with 'should' (e.g., it('returns null if input is empty'))
* Use stubLogger() for typed PackmindLogger; avoid hand-rolled jest.Mocked<PackmindLogger> objects
* Use expect(actualArray).toEqual(expectedArray) for deep array equality (e.g., expect(result).toEqual([1,2,3])), not length/index checks
* Clear mocks in afterEach(() => jest.clearAllMocks()), not beforeEach
* Call datasource.destroy() in afterEach when the test database is initialized in beforeEach
* Avoid asserting on logger output (messages or call counts); verify observable behavior or return values instead
* Avoid testing that a method is a function; invoke it and assert behavior instead
```
  3. Context Optimization

Because LLMs have limited attention windows, you can’t just throw everything in. You need to optimize which context, when, and how it’s loaded.

Techniques we lean on:

  • Session splitting
  • Modular context loading
  • Injected helper files

This ensures our AI assistants stay focused, performant, and aligned to our rules.
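One concrete mechanism for modular context loading is scoped instruction files. GitHub Copilot, for example, supports an `applyTo` front-matter glob in `.instructions.md` files, so a rule set is only pulled into context when a matching file is being edited (the rules below are illustrative, not Packmind’s actual configuration):

```markdown
---
applyTo: "**/*.spec.ts"
---
# Unit test rules (loaded only when a test file is in scope)
* Name tests with assertive, verb-first phrases.
* Clear mocks in afterEach, not beforeEach.
```

This keeps the default context small: general rules stay global, while specialized rules cost attention only when they apply.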

For instance, our detailed coding recipes and workflows each run to more than 500 lines of Markdown, so rather than inlining them we include an index in an AGENTS.md file that invites agents to load the full recipe on demand:

```markdown
# Packmind Recipes Index

- [Refactor Use Case to Follow IUseCase Pattern](recipes/refactor-use-case-to-follow-iusecase-pattern.md) : Refactor existing use cases to implement the IUseCase pattern for consistent command/response typing and enhanced business rule enforcement, ensuring maintainability and clarity across your application's architecture.
- [Repository Implementation and Testing Pattern](recipes/repository-implementation-and-testing-pattern.md) : Implement a standardized repository with soft delete functionality and comprehensive tests to ensure maintainable code and reliable data access patterns in the Packmind codebase.
- [Using Environment Variables with Configuration.getConfig](recipes/using-environment-variables-with-configurationgetconfig.md) : Access environment variables using `Configuration.getConfig()` to streamline configuration management across local and production environments, ensuring secure and consistent access to sensitive data.
- [Wrapping Chakra UI with Slot Components](recipes/wrapping-chakra-ui-with-slot-components.md) : Create slot components to wrap Chakra UI primitives for enhanced custom composition and API consistency in your design system.
...
```

When you combine these layers, you get Agentic Workflows — pipelines where prompts, primitives, and context strategies converge to deliver dependable, repeatable output.