The Testing Pyramid

Agile has totally changed the focus of testing and introduced automation to support its development practices. I have been involved with software development teams for nearly thirty years. Over that time I have seen many different methods and practices come and go, but testing remained focused on manual testing. That is, until Agile software development arrived.

An Agile iterative approach to software development means you have to test all your software all the time, ensuring that what you have just built has not broken a previously written piece of functionality. Agile automates testing at all layers of the application, and this approach is fast overtaking the traditional manual approach. The automated tests that previously existed were focused on the front end, the most volatile part of the application. They were also written after the application was built, so they did not give the short feedback loop we rely on in Agile software development. The Testing Pyramid puts a different testing emphasis on each of the application layers and focuses effort at the beginning of the development cycle rather than at the end.

The Testing Pyramid

Looking around the web you will see various implementations of the Testing Pyramid, with different names for the levels and different types of tests at each level. The one I use is the basic three-level pyramid – Unit Tests, Service Tests and UI Tests. As with any pyramid, each level builds on the solidity of the level below it.

From a technical viewpoint one can look at these three levels as small, medium and large. Small tests are discrete unit tests that use mocks and stubs to stand in for collaborating objects and are typically written in one of the xUnit frameworks. Service-level tests are the medium-sized tests and interface with one other system, typically a database or a data bus. Large tests – the UI tests – collaborate with many sub-systems and support the end-to-end scenarios.
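As a sketch of what a "small" test looks like, here is a hypothetical example using Python's unittest (one of the xUnit family) with a stub standing in for a collaborator. The `OrderService` and price-feed names are invented for illustration, not taken from any real project.

```python
import unittest
from unittest.mock import Mock

# Hypothetical class under test: it collaborates with a price feed.
class OrderService:
    def __init__(self, price_feed):
        self.price_feed = price_feed

    def total(self, item, quantity):
        return self.price_feed.price_of(item) * quantity

class OrderServiceTest(unittest.TestCase):
    def test_total_uses_collaborator(self):
        # Stub the collaborator so the test stays small and isolated.
        feed = Mock()
        feed.price_of.return_value = 5
        service = OrderService(feed)
        self.assertEqual(service.total("widget", 3), 15)
        feed.price_of.assert_called_once_with("widget")

if __name__ == "__main__":
    unittest.main()
```

Because the collaborator is stubbed, the test runs in milliseconds and fails only when `OrderService` itself is wrong – the property that makes small tests the solid base of the pyramid.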

Personally, I look at the levels in a functional way:

  • UI Tests: Does the software work as expected by the end user?
  • Service Tests: Does the software meet the acceptance criteria on the user story?
  • Unit Tests: As a developer does the software work as I expect?

Unit Tests

Unit tests are the foundation of the Testing Pyramid. This is where the bulk of the tests are written, using one of the xUnit frameworks – for example, JUnit when developing in Java. They ask the question “Are we building the product right?”. When writing software I like to take a Test Driven Development (TDD) approach. TDD is a design approach in which the tests are written first and the code is then written to make those tests pass. There are a number of benefits to taking this approach:

  • High test coverage of the written code
  • Encourages the use of Object Oriented Analysis and Design (OOAD) techniques
  • Allows you to move forward with confidence knowing the functionality works
  • Debugging is minimised because a test takes you straight to the problem
  • Developers take more responsibility for the quality of their code
  • Because it is written to support specific tests, the code works by design not by coincidence or accident
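The test-first rhythm can be sketched in a few lines. This is a minimal, hypothetical example in Python's unittest (standing in for JUnit, since the post develops in Java): the tests are written first and express the intent, then just enough code is written to make them pass.

```python
import unittest

# Step 1 (red): write the tests first, before the implementation exists.
class LeapYearTest(unittest.TestCase):
    def test_year_divisible_by_4_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap_unless_divisible_by_400(self):
        self.assertFalse(is_leap_year(1900))
        self.assertTrue(is_leap_year(2000))

# Step 2 (green): write just enough code to make the tests pass.
def is_leap_year(year):
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

if __name__ == "__main__":
    unittest.main()
```

A failing test now takes you straight to the broken rule, which is why debugging is minimised and the code works by design rather than by accident.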

Service Tests

I see Service Tests as supporting the acceptance criteria on the user story. They ask the question “Are we building the right product?”. When writing these tests, I like to take advantage of the Given/When/Then BDD format of the acceptance criteria and use one of the BDD test frameworks, typically Cucumber. I also adopt specification by example, that is, using real-world examples in the acceptance criteria. This approach gives a number of benefits:

  • Assurance that all stakeholders and delivery team members understand what needs to be delivered through greater collaboration
  • The avoidance of re-work through precise specifications and therefore higher productivity
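In Cucumber the criteria live in Gherkin feature files; as a language-neutral sketch, the same Given/When/Then shape with a concrete example can be expressed directly in an xUnit-style test. The account domain, names and amounts below are invented purely for illustration.

```python
import unittest

# Hypothetical domain object, for illustration only.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class WithdrawalAcceptanceTest(unittest.TestCase):
    def test_withdrawal_reduces_balance(self):
        # Given an account with a balance of 100
        account = Account(balance=100)
        # When the customer withdraws 30
        account.withdraw(30)
        # Then the balance is 70
        self.assertEqual(account.balance, 70)

if __name__ == "__main__":
    unittest.main()
```

The concrete numbers (100, 30, 70) are the "specification by example" part: stakeholders can read and challenge them without knowing the code.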

UI Tests

The User Interface is the most volatile part of the application and should not contain any business logic. For these reasons the least emphasis on automated testing should be placed here. That does not mean there is no testing: I like to automate the key user journeys through the system using one of the UI testing frameworks, e.g. WebDriver. UI testing demonstrates that all the subsystems are talking to each other.
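A real WebDriver test needs a browser, so as an illustration only, here is the shape a key-journey test often takes, shown against an invented in-memory "shop" standing in for the live UI. Every name here is hypothetical; the point is the structure: one journey, a few steps, assertions at the end.

```python
# Sketch of a key user journey test. A real version would drive a
# browser via a framework such as WebDriver; FakeShop is an invented
# stand-in so the shape of the journey is visible without a browser.
class FakeShop:
    def __init__(self):
        self.user = None
        self.cart = []
        self.orders = []

    def log_in(self, username):
        self.user = username

    def add_to_cart(self, item):
        self.cart.append(item)

    def check_out(self):
        self.orders.append(list(self.cart))
        self.cart = []
        return "order confirmed"

def test_purchase_journey():
    # One end-to-end journey: log in -> add an item -> check out.
    shop = FakeShop()
    shop.log_in("alice")
    shop.add_to_cart("book")
    assert shop.check_out() == "order confirmed"
    assert shop.orders == [["book"]]

test_purchase_journey()
```

Keeping only a handful of such journeys automated is what keeps the top of the pyramid narrow.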

I use manual testing for look and feel, checking that the UI behaves as expected, and exploratory testing to find those hidden nuances.

Test Delivery

Unit tests and Service tests should be delivered in the iteration by the delivery team. As part of their development practices, developers write Unit tests while building the functional code. Ideally a test-first, TDD approach should be used.

My conclusion?

The Testing Pyramid inverts the traditional approach to testing. The focus moves to the beginning of the development process, with the developer taking responsibility for the quality of their code. This is a very different way of looking at the problem from the traditional approach, where code is handed over to the tester, who is then assumed to be responsible for its quality.

The early identification of defects gives two major business benefits:

  • Issues are discovered early in the development process, reducing the cost of fixing defects compared with late discovery
  • Issues are identified and resolved early, removing the need to postpone the production release because of late-found problems


2 thoughts on “The Testing Pyramid”

  1. Hi John

    This is not the first time I have heard this, and while I believe there is real value in automation when it is effectively implemented, I believe there are some fundamental problems with the conclusions you have drawn.

    In my view good development approaches do not just look at testing the delivered solutions, and while Agile, implemented well, is making a big difference, the early identification of problems is something the best projects have been doing for years. I built my first requirements checklist over 20 years ago and I’m still using an updated version of it today – it forms the core of the Requirements & Traceability masterclass, has saved clients huge sums of money and time in reduced rework, and can be used in both traditional and agile projects.

    As a veteran of RAD and of very successful large-scale automation, which was used in advanced case studies published in (at that time) the definitive book on automation, I’ve come to realise that what was being automated were not “tests” but actually “checks”. This may seem a subtle difference but the significance is much greater. Take a look at:

    I would draw your attention, in particular, to one specific point that is highlighted – “Testing encompasses checking, whereas checking cannot encompass testing”. Checks can, and in many cases should, be automated; but checking by humans also has value, and it has the advantage over an automated check that the human carrying it out can draw inferences, evaluate anomalies, investigate further, etc.

    I also agree there is potentially a lot of value in BDD and specification by example. Sopra actually featured as one of the case studies in Gojko Adzic’s book – for the work done at the time in O2. The problem is that the “real” acceptance criteria are not just those specified in the user story or Cucumber/Gherkin scripts – these represent a model of what acceptance may look like, but as with all models it’s worth remembering “… all models are wrong, some models are useful”. As we’re all aware, not all requirements will be written down, and some of those which are do not accurately describe the actual requirements – for an example see the story of the 300 millisecond response time in

    For me in terms of good testing in an agile context these are some of the key things:
    • Everybody tests, not just testers
    • Three pillars
    o Automated checks used by developers
     – Good checks built by developers and implemented using appropriate frameworks relevant to the level (I like your Unit/UI/Service split)
     – These should be based on knowledge of relevant testing techniques
     – They help to maintain build quality and provide regression
     – Focussed on specific functions
    o On-going User Acceptance Test by Product Owner
     – Throughout the project using retrospectives, show and tell, end-to-end walkthroughs, etc.
     – Focusses on user happy path / norm
    o Exploratory Testing by Testers
     – To cover the gaps if you only do the first two
     – Focusses on edge cases, what if, error-handling, etc.
    o If these are all in operation then they act like posts you can throw a tight rope around – wrangling the bugs and allowing us to deal with them! If one of these is missing, or being done poorly, then the bugs just jump over the rope.

    Another key challenge is that of automation architecture. Good automation is hard. Good automation infrastructure is harder still. There is a huge overhead to poor implementation, and I have seen first-hand how Cucumber/Gherkin scripts can get out of control.

    It is also worth remembering that the quality of the checks themselves is vital – bad checks automated are still bad. Worse, they become dangerous because over-reliance on automation gives the impression of good ‘testing’ when in fact it is anything but. As I once said in a conference presentation on the Top 10 problems of automation – “if you automate chaos the only thing you will get is less time and money to fix the real problem”. In fact the biggest (actually the 11th) problem I listed is the failure to manage expectations.

    In reality, they can quickly become a monster that needs feeding – a lot! The fundamental issue is that automation is code, and that means you have to plan, resource, design, build, test and maintain it, because like all code it will contain bugs and it will need to change. I have seen and heard of numerous agile teams where almost as much time, if not more, is taken up extending and maintaining the automation as delivering more or better functionality and user experience to the business. And let’s not even get started on trying to fund it…

    As a last thought on agile testing, for now at least, I would recommend you take a look at a different take on the Agile Testing Quadrants that we often hear about. I’d be interested in your thoughts, as a number of agile folk I have spoken to feel it is a clearer and more appropriate picture. See




    1. Hi Graham,

      Thank you for your comments. I acknowledge that the Testing Pyramid is not a complete approach to testing and only focuses on the business logic. There are numerous other tests to be done: performance and penetration testing, for example.

      I totally agree with your key items in Agile testing. For me Agile is all about short feedback loops and everybody taking responsibility for the product. Therefore quality checks are continuous and take place in many guises not just through formal automated or manual testing.

      With regard to the Testing Pyramid vs the Agile Testing Quadrants: the key reason I like the Testing Pyramid is the way it highlights Unit Tests as the bulk of the testing. The responsibility sits with the developer to ensure their code works as expected, rather than delegating that responsibility to the tester. That shift of responsibility is what I like.


