Transcript
  • Architectures you Always Wondered About // eMag Issue 31 - Aug 2015 1

    eMag Issue 31 - August 2015

    PRESENTATION SUMMARY

    Service Architectures at Scale

    INTERVIEW

    Eric Evans on DDD at 10

    ARTICLE Microservice trade-offs

    FACILITATING THE SPREAD OF KNOWLEDGE AND INNOVATION IN PROFESSIONAL SOFTWARE DEVELOPMENT

    Architectures you Always Wondered About

    Lessons learnt from adopting Microservices at eBay, Google, Gilt, Hailo and nearForm

  • FOLLOW US CONTACT US

    Martin Fowler on Microservice trade-offsMany development teams have found the microservices architectural style to be a superior approach to a monolithic architecture. But other teams have found them to be a productivity-sapping bur-den. Like any architectural style, micros-ervices bring costs and benefits. To make a sensible choice you have to understand these and apply them to your specific context.

    Eric Evans on the interplay of Domain-Driven Design, microservices, event-sourcing, and CQRSThe interview covers an introduction to DDD, how the communitys under-standing of DDD has changed in the last 10 years, strategic design, how to use DDD to design microservices, and the connection between microservices and the DDD bounded context.

    Lessons Learned Adopting Microservices at Gilt, Hailo and nearFormThis article contains an extensive interview on the mi-croservices adoption process, the technologies used, the benefits and difficulties of implementing microservices, with representatives from Gilt, Hailo and nearForm.

    Building a Modern Microservices Architecture at Gilt

    After living with microservices for three years, Gilt can see advan-tages in team ownership, boundaries defined by APIs, and complex problems broken down into small ones, Yoni Goldberg explained in a presentation at the QCon London 2015 conference. Challenges still exist in tooling, integration environments, and monitoring.

    Evolutionary ArchitectureRandy Shoup talks about designing and building micros-ervices based on his experience of working at large compa-nies, such as Google and eBay. Topics covered include the real impact of Conways law, how to decide when to move to a microservice-based architecture, organizing team structure around microservices, and where to focus on the standardization of technology and process.

    Service Architectures at Scale: Lessons from Google and eBayRandy Shoup discusses modern service architectures at scale, using specif-ic examples from both Google and eBay. He covers some interesting lessons learned in building and operating these sites. He concludes with a number of experience-based recommendations for other smaller organizations evolving to -- and sustaining -- an effective service ecosystem.

    GENERAL FEEDBACK [email protected]

    ADVERTISING [email protected]

    EDITORIAL [email protected]

    /InfoQ@InfoQ google.com

    /+InfoQlinkedin.com

    company/infoq

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 20154

    A LETTER FROM THE EDITOR

    This eMag has had an unusual history. When we started to plan it the intent had been to look at the different architectural styles of a number of the well known Silicon Valley firms. As we start-ed to work on it though it become apparent that nearly all of them had, at some level, converged towards the same architectural style - one based on microservices, with DevOps and some sort of agile (in the broadest sense) management ap-proach.

    According to ThoughtWorks Chief Scientist Martin Fowler the term microservice was dis-cussed at a workshop of software architects near Venice in May 2011, to describe what the partic-ipants saw as a common architectural style that many of them had begun exploring recently. In May 2012, the same group decided on microser-vices as the most appropriate name.

    When we first started talking about the mi-croservices architectural style at InfoQ in 2013, I think many of us assumed that its inherent oper-ational complexity would prevent the approach being widely adopted particularly quickly. Yet a mere three years on from the term being coined it has become one of the most commonly cited approaches for solving large-scale horizontal scaling problems, and most large web sites in-cluding Amazon and eBay have evolved from a

    monolithic architecture to a microservices one. Moreover the style has spread far beyond its Bay Area roots, seeing widespread adoption in many organisations.

    In this eMag we take a look at the state of the art in both theory and practice.

    Martin Fowler provides a clear and concise summary of the trade-offs involved when choos-ing to work with the style.

    Eric Evans talks about the interplay of Do-main-Driven Design, microservices, event-sourc-ing, and CQRS.

    Randy Shoup describes experiences of working with microservices from his time at eBay and Google. He focuses on the common evolu-tionary path from monoliths to microservices and paints a picture of a mature services envi-ronment at Google. In a follow-up interview he elaborates on some of the lessons from this ex-perience.

    Then Abel Avram speaks to three com-panies - Gilt, Hailo and nearForm - about their experiences covering both building a microser-vices platform from scratch and re-architecting a monolithic platform by gradually introducing mi-croservices. In the follow-up presentation sum-mary we take a more detailed look at Gilt.

    took over as head of the editorial team at InfoQ.com in March 2014,

    guiding content creation including news, articles, books, video presentations, and interviews. Prior to taking on the full-time role at InfoQ, Charles led InfoQs Java coverage, and was CTO for PRPi Consulting, a remuneration research rm that was acquired by PwC in July 2012. For PRPi, he had overall responsibility for the development of all the custom software used within the company. He has worked in enterprise software for around 20 years as a developer, architect, and development manager.

    CHARLES HUMBLE

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 2015 5

    Microservice trade-offs by Martin Fowler

    Read on martinfowler.com

    Many development teams have found the microservices architectural style to be a superior approach to a monolithic architecture. But other teams have found them to be a productivity-sapping burden. Like any architectural style, microservices bring costs and benefits. To make a sensible choice you have to understand these and apply them to your specific context.

    Martin Fowler is an author, speaker, and general loud-mouth on software development. Hes long been puzzled by the problem of how to componentize software systems, having heard more vague claims than hes happy with. He hopes that microservices will live up to the early promise its advocates have found.

    Strong Module Boundaries: Microservices reinforce mod-ular structure, which is par-ticularly important for larger teams.

    Microservices provide benefits

    Independent Deployment: Simple services are easier to deploy, and since they are autonomous, are less like-ly to cause system failures when they go wrong.

    Technology Diversity: With micro-services you can mix multiple lan-guages, development frameworks and data-storage technologies.

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 20156

    Distribution: Distributed sys-tems are harder to program, since remote calls are slow and are always at risk of fail-ure.

    but come with costs

    Eventual Consistency: Maintain-ing strong consistency is extremely difficult for a distributed system, which means everyone has to man-age eventual consistency.

    Operational Complexity: You need a mature operations team to manage lots of services, which are being redeployed regularly.

    Strong Module BoundariesThe first big benefit of microser-vices is strong module boundar-ies. This is an important benefit yet a strange one, because there is no reason, in theory, why a mi-croservices should have stron-ger module boundaries than a monolith.

    So what do I mean by a strong module boundary? I think most people would agree that its good to divide up software into modules: chunks of software that are decoupled from each other. You want your modules to work so that if I need to change part of a system, most of the time I only need to understand a small part of that system to make the change, and I can find that small part pretty easily. Good modular structure is useful in any pro-gram, but becomes exponential-ly more important as the soft-ware grows in size. Perhaps more importantly, it grows more in im-portance as the team developing it grows in size.

    Advocates of microservices are quick to introduce Conways Law, the notion that the struc-ture of a software system mirrors the communication structure of the organization that built it. With larger teams, particularly if these teams are based in differ-ent locations, its important to structure the software to recog-nize that inter-team communi-cations will be less frequent and more formal than those within a team. Microservices allow each team to look after relatively inde-

    pendent units with that kind of communication pattern.

    As I said earlier, theres no reason why a monolithic system shouldnt have a good modular structure. [1] But many people have observed that it seems rare, hence theBig Ball of Mudis most common architectural pattern. Indeed this frustration with the common fate of monoliths is whats driven several teams to microservices. The decoupling with modules works because the module boundaries are a barrier to references between modules. The trouble is that, with a mono-lithic system, its usually pretty easy to sneak around the barrier. Doing this can be a useful tacti-cal shortcut to getting features built quickly, but done widely they undermine the modular structure and trash the teams productivity. Putting the mod-ules into separate services makes the boundaries firmer, making it much harder to find these can-cerous workarounds.

    An important aspect of this coupling is persistent data. One of the key characteristics of mi-croservices isDecentralized Data Management, which says that each service manages its own database and any other service must go through the services API to get at it. This eliminatesIn-tegration Databases, which are a major source of nasty coupling in larger systems.

    Its important to stress that its perfectly possible to have firm module boundaries with a monolith, but it requires disci-

    pline. Similarly you can get a Big Ball of Microservice Mud, but it requires more effort to do the wrong thing. The way I look at, using microservices increases the probability that youll get better modularity. If youre confident in your teams discipline, then that probably eliminates that advan-tage, but as a team grows it gets progressively harder to keep dis-ciplined, just as it becomes more important to maintain module boundaries.

    This advantage becomes a handicap if you dont get your boundaries right. This is one of the two main reasons for aMono-lith First strategy, and why even those more inclined to run with microservices early stress that you can only do so with a well understood domain.

    But Im not done with ca-veats on this point yet. You can only really tell how well a system has maintained modularity after time has passed. So we can only really assess whether microser-vices lead to better modularity once we see microservice sys-tems that have been around for at least a few years. Furthermore, early adopters tend to be more talented, so theres a further delay before we can assess the modularity advantages of micro-service systems written by aver-age teams. Even then, we have to accept that average teams write average software, so rather than compare the results to top teams we have to compare the result-ing software to what it would have been under a monolithic

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 2015 7

    architecture - which is a tricky counter-factual to assess.

    All I can go on for the mo-ment is the early evidence I have hear from people I know who have been using this style. Their judgement is that it is significant-ly easier to maintain their mod-ules.

    One case study was partic-ularly interesting. The team had made the wrong choice, using microservices on a system that wasnt complex enough to cov-er theMicroservice Premium. The project got in trouble and needed to be rescued, so lots more people were thrown onto the project. At this point the mi-croservice architecture became helpful, because the system was able to absorb the rapid influx of developers and the team was able to leverage the larger team numbers much more easily than is typical with a monolith. As a result the project accelerated to a productivity greater than would have been expected with a monolith, enabling the team to catch up. The result was still a net negative, in that the soft-ware cost more staff-hours than it would have done if they had gone with a monolith, but the microservices architecture did support ramp up.

    DistributionSo microservices use a distribut-ed system to improve modulari-ty. But distributed software has a major disadvantage, the fact that its distributed. As soon as you play the distribution card, you incur a whole host of complex-ities. I dont think the microser-vice community is as naive about these costs as the distributed objects movement was, but the complexities still remain.

    The first of these is perfor-mance. You have to be in a really unusual spot to see in-process function calls turn into a perfor-mance hot spot these days, but

    remote calls are slow. If your ser-vice calls half-a-dozen remote services, each which calls anoth-er half-a-dozen remote services, these response times add up to some horrible latency character-istics.

    Of course you can do a great deal to mitigate this prob-lem. Firstly you can increase the granularity of your calls, so you make fewer of them. This compli-cates your programming model, you now have to think of how to batch up your inter-service inter-actions. It will also only get you so far, as you are going to have to call each collaborating service at least once.

    The second mitigation is to use asynchrony. If make six asyn-chronous calls in parallel youre now only as slow as the slowest call instead of the sum of their la-tencies. This can be a big perfor-mance gain, but comes at anoth-er cognitive cost. Asynchronous programming is hard: hard to get right, and much harder to debug. But most microservice stories Ive heard need asynchrony in order to get acceptable performance.

    Right after speed is reliabil-ity. You expect in-process func-tion calls to work, but a remote call can fail at any time. With lots of microservices, theres even more potential failure points. Wise developers know this and design for failure. Happily the kinds of tactics you need for asynchronous collaboration also fit well with handling failure and the result can improve resiliency. Thats not much compensation however, you still have the extra complexity of figuring out the consequences of failure for every remote call.

    And thats just the top twoFallacies of Distributed Com-puting.

    There are some caveats to this problem. Firstly many of these issues crop up with a monolith as it grows. Few mono-

    liths are truly self-contained, usu-ally there are other systems, of-ten legacy systems, to work with. Interacting with them involves going over the network and run-ning into these same problems. This is why many people are in-clined to move more quickly to microservices to handle the in-teraction with remote systems. This issue is also one where expe-rience helps, a more skillful team will be better able to deal with the problems of distribution.

    But distribution is always a cost. Im always reluctant to play the distribution card, and think too many people go distributed too quickly because they under-estimate the problems.

    Eventual ConsistencyIm sure you know websites that need a little patience. You make an update to something, it re-freshes your screen and the up-date is missing. You wait a minute or two, hit refresh, and there it is.

    This is a very irritating us-ability problem, and is almost certainly due to the perils of eventual consistency. Your up-date was received by the pink node, but your get request was handled by the green node. Until the green node gets its update from pink, youre stuck in an in-consistency window. Eventual-ly it will be consistent, but until then youre wondering if some-thing has gone wrong.

    Inconsistencies like this are irritating enough, but they can be much more serious. Business logic can end up making deci-sions on inconsistent informa-tion, when this happens it can be extremely hard to diagnose what went wrong because any investigation will occur long af-ter the inconsistency window has closed.

    Microservices introduce eventual consistency issues be-cause of their laudable insistence

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 20158

    Microservices are the first post

    DevOps revolution architecture

    - Neal Ford

    on decentralized data manage-ment. With a monolith, you can update a bunch of things to-gether in a single transaction. Microservices require multiple resources to update, and distrib-uted transactions are frowned on (for good reason). So now, devel-opers need to be aware of consis-tency issues, and figure out how to detect when things are out of sync before doing anything the code will regret.

    The monolithic world isnt free from these problems. As sys-tems grow, theres more of a need to use caching to improve per-formance, and cache invalidation is the other Hard Problem. Most applications needoffline locksto avoid long-lived database trans-actions. External systems need updates that cannot be coordi-nated with a transaction manag-er. Business processes are often more tolerant of inconsistencies than you think, because busi-nesses often prize availability more (business processes have long had an instinctive under-standing of theCAP theorem).

    So like with other distribut-ed issues, monoliths dont entire-ly avoid inconsistency problems, but they do suffer from them much less, particularly when they are smaller.

    Independent DeploymentThe trade-offs between modular boundaries and the complexities of distributed systems have been around for my whole career in this business. But one thing thats changed noticeably, just in the last decade, is the role of releas-ing to production. In the twenti-eth century production releases were almost universally a painful and rare event, with day/night weekend shifts to get some awk-ward piece of software to where it could do something useful. But these days, skillful teams release frequently to production, many

    organizations practicing Con-tinuous Delivery, allowing them to do production releases many times a day.

    This shift has had a pro-found effect on the software industry, and it is deeply inter-twined with the microservice movement. Several microser-vice efforts were triggered by the difficulty of deploying large monoliths, where a small change in part of the monolith could cause the whole deployment to fail. A key principle of microser-vices is thatservices are compo-nentsand thus are independent-ly deployable. So now when you make a change, you only have to test and deploy a small service. If you mess it up, you wont bring down the entire system. After all, due the need to design for fail-ure, even a complete failure of your component shouldnt stop other parts of the system from working, albeit with some form of graceful degradation.

    This relationship is a two-way street. With many micros-ervices needing to deploy fre-quently, its essential you have your deployment act together. Thats why rapid application de-ployment and rapid provisioning of infrastructure areMicroservice Prerequisites. For anything be-yond the basics, you need to be doing continuous delivery.

    The great benefit of contin-uous delivery is the reduction in cycle-time between an idea and running software. Organizations that do this can respond quickly to market changes, and intro-duce new features faster than their competition.

    Although many people cite continuous delivery as a reason to use microservices, its essen-tial to mention that even large monoliths can be delivered con-tinuously too. Facebook and Etsy are the two best known cases. There are also plenty of cases where attempted microservices

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 2015 9

    architectures fail at independent deployment, where multiple ser-vices need their releases to be carefully coordinated[2]. While I do hear plenty of people arguing that its much easier to do con-tinuous delivery with microser-vices, Im less convinced of this than their practical importance for modularity - although natu-rally modularity does correlate strongly with delivery speed.

    Operational ComplexityBeing able to swiftly deploy small independent units is a great boon for development, but it puts additional strain on oper-ations as half-a-dozen applica-tions now turn into hundreds of little microservices. Many organi-zations will find the difficulty of handling such a swarm of rapidly changing tools to be prohibitive.

    This reinforces the import-ant role of continuous delivery. While continuous delivery is a valuable skill for monoliths, one thats almost always worth the effort to get, it becomes essential for a serious microservices set-up. Theres just no way to handle dozens of services without the automation and collaboration that continuous delivery fosters. Operational complexity is also increased due to the increased demands on managing these services and monitoring. Again a level of maturity that is useful for monolithic applications be-comes necessary if microservices are in the mix.

    Microservice proponents like to point out that since each service is smaller its easier to understand. But the danger is that complexity isnt eliminat-ed, its merely shifted around to the interconnections between services. This can then surface as increased operational com-plexity, such as the difficulties in debugging behavior that spans services. Good choices of service

    boundaries will reduce this prob-lem, but boundaries in the wrong place makes it much worse.

    Handling this operation-al complexity requires a host of new skills and tools - with the greatest emphasis being on the skills. Tooling is still immature, but my instinct tells me that even with better tooling, the low bar for skill is higher in a microser-vice environment.

    Yet this need for better skills and tooling isnt the hardest part of handling these operational complexities. To do all this effec-tively you also need to introduce a devops culture: greater col-laboration between developers, operations, and everyone else involved in software delivery. Cultural change is difficult, es-pecially in larger and older orga-nizations. If you dont make this up-skilling and cultural change, your monolithic applications will be hampered, but your microser-vice applications will be trauma-tized.

    Technology DiversitySince each microservice is an independently deployable unit, you have considerable freedom in your technology choices with-in it. Microservices can be written in different languages, use differ-ent libraries, and use different data stores. This allows teams to choose an appropriate tool for the job, some languages and li-braries are better suited for cer-tain kinds of problems.

    Discussion of technical di-versity often centers on best tool for the job, but often the biggest benefit of microservices is the more prosaic issue of versioning. In a monolith you can only use a single version of a library, a situa-tion that often leads to problem-atic upgrades. One part of the system may require an upgrade to use its new features but can-not because the upgrade breaks

    another part of the system. Deal-ing with library versioning issues is one of those problems that gets exponentially harder as the code base gets bigger.

    There is a danger here that there is so much technology di-versity that the development organization can get over-whelmed. Most organizations I know do encourage a limited set of technologies. This encourage-ment is supported by supplying common tools for such things as monitoring that make it easier for services to stick to a small portfo-lio of common environments.

    Dont underestimate the value of supporting experimen-tation. With a monolithic system, early decisions on languages and frameworks are difficult to re-verse. After a decade or so such decisions can lock teams into awkward technologies. Microser-vices allow teams to experiment with new tools, and also to grad-ually migrate systems one ser-vice at a time should a superior technology become relevant.

    Secondary FactorsI see the items as above as the primary trade-offs to think about. Heres a couple more things that come up that I think are less im-portant.

    Microservice proponents often say that services are easier to scale, since if one service gets a lot of load you can scale just it, rather than the entire applica-tion. However Im struggling to recall a decent experience report that convinced me that it was actually more efficient to do this selective scaling compared to doing cookie-cutter scaling by copying the full application.

    Microservices allow you to separate sensitive data and add more careful security to that data. Furthermore by ensuring all traffic between microservices is secured, a microservices ap-proach could make it harder to

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 201510

    exploit a break-in. As security issues grow in importance, this could migrate to becoming a ma-jor consideration for using micro-services. Even without that, its a not unusual for primarily mono-lithic systems to create separate services to handle sensitive data.

    Critics of microservices talk about the greater difficulty in testing a microservices applica-tion than a monolith. While this is a true difficulty - part of the greater complexity of a distrib-uted application - there aregood approaches to testing with mi-croservices. The most important thing here is to have the disci-pline to take testing seriously, compared to that the differences between testing monoliths and testing microservices are sec-ondary.

    Summing UpAny general post on any ar-chitectural style suffers from the Limitations Of General Ad-vice. So reading a post like this cant lay out the decision for you, but such articles can help ensure you consider the various factors that you should take into account. Each cost and benefit here will have a different weight

    for different systems, even swap-ping between cost and benefit (strong module boundaries are good in more complex systems, but a handicap to simple ones) Any decision you make depends on applying such criteria to your context, assessing which factors matter most for your system and how they impact your particular context. Furthermore, our expe-rience of microservice architec-tures is relatively limited. You can usually only judge architectural decisions after a system has ma-tured and youve learned what its like to work with years after development began. We dont have many anecdotes yet about long-lived microservice architec-tures.

    Monoliths and microser-vices are not a simple binary choice. Both are fuzzy defini-tions that mean many systems would lie in a blurred boundary area. Theres also other systems that dont fit into either catego-ry. Most people, including my-self, talk about microservices in contrast to monoliths because it makes sense to contrast them with the more common style, but we must remember that there are systems out there that dont

    It's important to stress that it's

    perfectly possible to have firm module

    boundaries with a monolith, but it

    requires discipline. Similarly you can

    get a Big Ball of Microservice Mud,

    but it requires more effort to do the

    wrong thing

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 2015 11

    For more information about microservices, start with my Microservice Resource Guide, where Ive selected the best information on the what, when, how, and who of microservices.

    Sam Newmans book Is the key resource if you want to find out more about how to build a microservice sys-tem.

    fit comfortably into either cat-egory. I think of monoliths and microservcies as two regions in the space of architectures. They are worth naming because they have interesting characteristics that are useful to discuss, but no sensible architect treats them as a comprehensive partitioning of the architectural space.

    That said, one general sum-mary point that seems to be widely accepted is there is aMi-croservice Premium: microser-vices impose a cost on produc-tivity that can only be made up for in more complex systems. So if you can manage your systems complexity with a monolithic ar-chitecture then you shouldnt be using microservices.

    But the volume of the mi-croservices conversation should not let us forget the more im-portant issues that drive the success and failure of software projects. Soft factors such as the quality of people on the team, how well they collaborate with each other, and the degree of communication with domain ex-perts, will have a bigger impact than whether to use microser-vices or not. On a purely techni-cal level, its more important to focus on things like clean code, good testing, and attention to evolutionary architecture.

    Footnotes1:Some people consider mono-lith to be an insult, always im-plying poor modular structure. Most people in the microservices world dont do this, they define monolith purely as an applica-tion built as a single unit. Cer-tainly microservices-advocates believe that most monoliths end up being Big Balls of Mud, but I dont know any who would ar-gue that its impossible to built a well-structured monolith.

    2:The ability to deploy ser-vices independently ispart of the definition of microservices. So its

    reasonable to say that a suite of services that must have its de-ployments coordinated is not a microservice architecture. It is also reasonable to say that many teams that attempt a microser-vice architecture get into trouble because they end up having to coordinate service deployments.

    Further Reading

    Sam Newman gives his list of the benefits of microservices inChapter 1 of his book(the essential source for details of building a microservices sys-tem).

    Benjamin Woottons post, Microservices - Not A Free Lunch! on High Scalea-bility, was one of the earliest, and best, summaries of the downsides of using micros-ervices.

    AcknowledgementsBrian Mason, Chris Ford, Rebecca Parsons, Rob Miles, Scott Robin-son, Stefan Tilkov, Steven Lowe, and Unmesh Joshi discussed drafts of this article with me.

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 201512

    Eric Evans on Domain-Driven Design at 10 Years

    Listen on SE Radio

    The show will be about Do-main-Driven Design at 10; so its already 10 years that you came up with the idea. Some of the listeners might not be that familiar with domain-driven design, so Eric can you give us a short introduction about do-main-driven design, what it is and how it is special?

    In its essence, domain-driven de-sign is a way of using models for

    creating software, especially the part of the software that handles complex business requirements into such behavior.

    So the particular way in do-main-driven design, the thing that we focus on, is that we want a language where we can really crisply, concisely describe any situation in the domain and de-scribe how were going to solve it or what kind of calculations we need to do. That language would be shared between business peo-

    ple, specialists in that domain, as well as software people who will be writing the software, and that we call the ubiquitous language because it runs through that whole process.

    We dont do as most proj-ects do. We dont talk to the business people sort of on their terms and then go and have very technical conversations about how the software will work, sep-arately. We try to bring those conversations together to create

    Eric Evans is the author of Domain-Driven Design: Tackling Complexity in Software. Eric now leads Domain Language, a consulting group which coaches and trains teams applying domain-driven design, helping them to make their development work more productive and more valua-ble to their business.

    Eberhard Wolff works as a freelance consultant, architect and trainer in Germany. He is cur-rently interested in Continuous Delivery and technologies such as NoSQL and Java. He is the au-thor of several books and articles and regularly speaks at national and international confer-ences.

    THE INTERVIEWEE

    THE INTERVIEWER

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 2015 13

    this conceptual model with very clear language. And thats a very difficult thing to do and it cant be done in a global sense. You cant come up with the model for your entire organization. At-tempts to do that are very coun-terproductive.

    So the other ingredient in that is what we call the bound-ed context. So I have a clear boundary, perhaps a particular application that were working on. Within this boundary, we say this is what my words mean, this is the relationship between the concepts. Everything is clear and we work hard to make it clear. But then outside that boundary, all sorts of different rules apply in some parts of the system, per-haps no rules really apply.

    Out of that comes the rest of domain-driven design, but thats really the essence of it. Its a particular way of dealing with these complex parts of our sys-tems.

    And I guess that is also why so many people are interested in that because thats really what a lot of software engineers do. Can you give an example of such a bounded context and how models might be different there? Because I think thats one of the very interesting parts of DDD; at least it was for me.

    I can give some examples. One common thing is that different parts of an organization might deal with the domain in a very different way. And there may even already be software, there probably is already software, that deals with those different parts.

    So take some company that does e-commerce. So theres a part of the software where were taking orders. So we are very fo-cused on what kind of items are

    in the inventory and how much they cost and how do we collect these items together into some kind of a shopping cart? And then eventually the order is cre-ated and then theres payment and all of those concerns.

    But then in shipping, per-haps theyre not really that in-terested in most of those issues. What they care about an item is what kind of box will it fit into, and how much does it weigh, and which kind of shipping did you pay for, and do we ship it all in one box, or this ones out of stock so were going to go ahead and ship the part weve got and then send the rest later, and how do we keep track of an order thats been partially shipped but part of its still waiting?

    Although in theory you could create a single model that would represent all these dif-ferent aspects, in practice thats not what people usually do and it works better to separate those two contexts and say, well we basically have a shipping system here and an order taking system and perhaps other things too - Im not saying it would just be those two. You could create concepts so general and versatile that you could handle all these cases. But were usually better off with more specialized models: a model that handles really well the idea of an order as a set of physical objects that fit into certain kinds of box-es and that you may or may not have available at this time; and another one that says, well, here are items that youve chosen and here are similar items; and just totally different issues.

    I think that thats a very good short introduction to DDD, and in particular the bounded contexts is I think really one of the interesting things here, as you said, where you would

    have totally different, lets say, perspectives on items wheth-er youre working on shipping or ordering things. So look-ing back on those ten years of DDD, what was the impact of DDD in your opinion?

    Well, its very hard for me to see that. I mean I do have a sense its had an impact and sometimes people tell me that it had a big influence on them. I think the things that made me feel best is occasionally someone says it sort of brought back the fun of soft-ware development for them or made them really enjoy software development again, and that particularly makes me feel good. Its so hard for me to judge what the overall impact is. I really dont know.

    Its always good if you can bring back the joy in work again. To me the impact of the book is quite huge. I would even call it a movement, sort of a DDD movement. Do you think thats true? Would you call it a move-ment or is it something differ-ent to you?

    I think that probably that was my intention, that I had the sense that I wanted to start a movement, maybe a very diffuse movement but it would be nice to think it had a little of that qual-ity to it.

    One of the things that I real-ly emphasize, and this part had a crusading quality to it, that when were working on software, we need to keep a really sharp focus on the underlying domain in the business. And we shouldnt look at our jobs as just technologists but really our job is -- and this is a difficult job -- as this person who can start to penetrate into the complexity and tangle of these domains and start to sift

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 201514

    that apart in a way that allows nice software to be written. And sometimes when you really do it well, the problem becomes sig-nificantly easier on a technical level.

    Yeah. So I think its about, lets say, model mining or knowl-edge mining what a lot of our job is about.

    Yes.

    Was the impact of your book and the movement, was it dif-ferent from what you have ex-pected? Are there any surpris-es?

    Well, one pleasant surprise is that it hasnt just kind of faded. Most books go through a fairly short cycle, they become very popular for three or four years and then they kind of drop into the back-ground. And DDD has stayed a pretty relevant topic for a long time now; really like 11 years since its now 2015. So that gen-uinely surprises me actually, but obviously in a good way.

    And I couldnt agree more. To me its almost like a time-less thing that youve created there. What part of the DDD movement did you learn the most from? What gave you the biggest progress in your own knowledge and skillset?

    Oh, I think that -- and this I think is closely related to why peo-ple still pay attention to DDD -- that DDD has not been static so the core concepts of it have been very stable - lets say I can express them better now. But the people who are really doing

    it, there have been a couple of big shifts in how people tend to build these systems.

    So the first big one was when event sourcing and CQRS came on the scene and the way that we went about design-ing and building a DDD system changed a lot. When did that happen? That was maybe 2007. Anyway, after a few years, and it may be that just about the time that we would have been follow-ing that natural cycle of whats new and moving on to that thing, DDD kind of got a big ren-ovation. I learned a tremendous amount from those people, from Greg Young, Udi Dahan, and the others who went around and around about this thing in CQRS and just really shook things up.

    My way of thinking about things changed a lot and I think the way most people think about DDD now is significantly differ-ent because of that. There have been a few other things but that was certainly the first big one.

    Do you think there are any cir-cumstances where a DDD ap-proach would fail? And how would you deal with them or is it something that can be made to work in any project?

    So there are a few aspects to that. Thats an interesting question because certainly DDD projects fail all the time. Its not unusual. Of course, some of that is just anything difficult fails sometimes so we neednt worry about that. And I think DDD is hard. So what would make a DDD project more likely to fail than other times? I think that some of the most com-mon things are there is a ten-dency to slip into perfectionism: whenever people are serious about modeling and design, they start slipping toward perfection-ism. Other people start slipping

    toward very broad scope: we will model the whole thing - even if we have five different bounded contexts; but well model each one with great elegance and all at the same time.

    So some projects fail be-cause they get too ambitious. They dont walk before they run. Some of them, because they are in an area where the strategy of the organization isnt very clear. Let me put it this way: the great-est value of DDD comes when youre very close to the bone of whats strategically important within the company. Thats when it pays off the most. You need a certain degree of complexity, in-tricacy in the problem or at least some fuzziness or else theres no point to all that thinking. But also, you need to be in some strategically relevant thing. But along with that goes a certain amount of the rough and tumble of the politics of organizations. Some organizations change their strategies.

    Ive seen all these things happen. There are all sorts of other things. Of course, some-times the skills of the team are the problem. So you might have a situation where they get off to a good start and then their exe-cution isnt very good. Of course, bad programming will under-mine any software approach. Ultimately, the code has to be good. Well, the code has to be competent. That one hurts a lot of projects.

    Of course, since I men-tioned bounded context earli-er and I want to underline how fundamental that is, its not an easy discipline. If weve estab-lished that the shipping context and the order taking context are separate and so weve made a boundary between them, there is some kind of translation lay-er there. But some programmer in the shipping context needs a piece of data, and that data is

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 2015 15

    over in the order taking context. Now, what does he do, does he just say, Well, I can just write a query on the order taking data-base. That will just take me 15 minutes. Or I could go and talk to people about how are we go-ing to enhance the interface and translator between those two contexts and then how would this piece of information be modeled within our context, and then Ill do my feature.

    So it takes a certain level of discipline to maintain that boundary. And when the bound-ary is gone, the two models start clashing and it all goes down the tubes pretty fast. Another vari-ation on this of course is that if you have a lot of legacy systems and youre trying to do some work. So ideally, you do that by isolating the new work within a bounded context and talking to the legacy through some kind of translator.

    Anyway, I could go on but I think its not too surprising that some of them fail.

    I agree. As you said, I guess DDD only makes sense if you have a certain complexity and with that comes risk but also by the potential value of the software I guess. What I found interesting about what you just said is that people get overambitious in some points and try to reach for a perfect state. To me that is natural for a lot of technical people. So I was wondering whether you have any secret sauce how to focus the ambition on those parts of the system where its really important, and to live with not so great quality in the other parts, and stop your am-bition there? Is there any tip that you have for that?

    I think that you summed it up very well then. Thats what you need to do and you need to have a boundary between the two. Theres quite a bit in DDD about part of the strategic de-sign part is how to decide which parts are which: like theres a general category of generic subdomains where we say, well, theres nothing all that special; were not creating something here that we want to innovate; this is something we want to do in a standard way. In fact, the ideal solution here would be lets not even write any software; lets see if we can get it off-the-shelf.

    Then theres lots of stuff that just keeps the lights on; that whether you have some brilliant insight or not isnt going to really change the outcome very much. And then there are those lever-age points. This is the general thing but its very hard, I admit, because first of all, its hard to know. Youll often get it wrong and youll often choose a topic which may turn out not to have been very core of the strategic value. But still I think theres a lot of value in trying.

    Another thing is the perfec-tionism, because even if you got zeroed in on a certain part that was strategically valuable, per-fectionism can still kill you. You have to deliver and you have to deliver fairly quickly. In fact, DDD depends on iteration. We as-sume that you dont understand the problem very well at the be-ginning, that youll get it wrong the first time. And so its essential to get the first one done quick and then get on to the second it-eration and get that done quick too, because youll probably get that wrong too. And then get on to the third one which is where youre probably going to have some clue by then and that third one might be fairly good. And if you can get it done fairly quick, then youll have time to do the

    fourth one which is going to be really elegant.

    And Im serious. I think that its a weird paradox but perfec-tionism prevents people from creating really elegant designs, because it slows them down in the early iterations so they dont get to do enough iterations. Multiple iterations, I mean itera-tion as in doing the same thing over, not iteration as when peo-ple really are talking about in-crements where Lets do a little iterative requirement at a time. I mean doing the same feature set then redoing the same fea-ture set again but with a nicer design, with a new insight into what it means. Thats the key: move quick.

    That sounds pretty interesting to focus on the number of re-iterations instead of reaching for a perfect solution at the very beginning. One thing Im really wondering is if you ever plan to update the book, is there anything you would like to change in the book?

    Im pretty sure Im not going to update the book. I might write something new at some point. But anyway, I think I probably wont change the book. But if I did or rather if I were going to try and explain DDD better, cer-tainly one thing that I have real-ized for a long time is that the concept of the bounded context is much too late in the book. All the strategic design stuff is way back at the back. I treat it as an advanced topic. Theres some logic to that but the trouble is that its so far back that most people never get to it really. So I would at least weave that in to the first couple of chapters. The ubiquitous language is in chapter 2 so thats all right. But

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 201516

    I would have bounded context there in chapter 2 or three.

    Another thing I would do is try to change the presentation of the building blocks. The build-ing blocks are things like the en-tities and value objects and lay-er domain events and stuff like that. They are important things but there is a big chunk of that stuff right in the early middle part. Most people dont get past it and they come away thinking that thats really the core of DDD, whereas in fact its really not. Its an important part just because it helps people bridge from the conceptual modeling of DDD to the necessity of having a pro-gram, having code that really re-flects that model, and bridging that gap is difficult; thats why there was so much emphasis on that. But I really think that the way I arranged the book gives people the wrong emphasis. So thats the biggest part of what Id do is rearrange those things.

    It makes a lot of sense I guess. I agree that strategic design is really, really important and re-ally one of those things that a lot of people dont think about when they hear the term DDD. Recently, we have seen a trend towards microservices archi-tectures and weve already have had quite a few discus-sions about microservices on the show. So how does micros-ervices fit into the DDD world? Is there erudition?

    Im quite enthusiastic about mi-croservices. I think that it helps people who want to do DDD quite a bit. And I also think that certain aspects of DDD can help people do microservices better. So when I say it helps people do DDD, Ive already talked about bounded contexts and how im-portant that is. If you think about

    what the people do when they do microservices in a serious way, the interior implementa-tion of microservices is very iso-lated. Everything is supposed to go through that interface. Any kind of data that they contain is exclusively held by them. Theres really a tight boundary and that is what you need.

    The bounded context is a concept which in more tradition-al architectures, there werent very good ways to implement that concept; to really establish the boundary. So it seems to me that microservices has delivered us a practical and popular way of defining and sticking to those boundaries. And thats a big deal. And the emphasis on the micro, well, someone once asked me, Whats the difference be-tween microservices and the old SOA services? And I said, Well, I think part of it is the micro. These services are smaller. Thats just a convention, of course, but its an important convention. The idea that a very small piece of software would be very isolat-ed and doing their own thing. If you think about my example of the order taking versus shipping, of course those are too big to be a single microservice. It would probably be a little cluster of them each. But this notion that you would take them separate-ly would come very natural in a microservices architecture. So thats one way, the big way, in which I see that it helps DDD.

    So when you say that shipping an order would be a cluster of microservices, does that mean that you would think that the bounded context would be a cluster of microservices? Is that what you are saying?

    That is exactly what Im saying. And this, by the way, kind of

    points into where I think DDD can be helpful to microservices, because they have the same problem that SOA had in the sense that there is a vagueness about who can understand the interface of a service.

    So within a bounded con-text, lets say the interior of a mi-croservice, there are things that are crystal clear, or at least they could be. So lets say that weve declared one of these microser-vices to be a context and every concept in there is very consis-tent throughout. So we have said that an order line item means this, and it has certain properties and certain rules about how you combine it with other line items and whatever; all of these things are very clear in terms of their language and their rules.

    Now we go to the interface of the service. And so there we would have certain language -- unless you view a service thats just some strings and numbers going in and out. But thats not the way people view services and not the way they do well-de-signed services. Well-designed services have a kind of language about what they do. They have a contract.

    So if we say, all right, well, then when you send an order to this microservice, this is what it means. And I dont just mean the fields that are in it. I mean this is the concept of what it is. Now, if you zoom out a little bit, you see that typically what people do is that they have little clusters of services that essentially speak the same language.

    So if my team is working on an ordering system, we may have a model and we might -- lets say we have five microservices and they speak the same language and weve worked hard to make that language clear. And then over here were saying, well, we really are dealing with a different set of problems. These microser-

  • Architectures you Always Wondered About // eMag Issue 31 - Aug 2015 17

    vices speak a different language. If you send an order over here, it might mean a little bit different thing. And even if we try to dis-ambiguate with longer names, we wont always get that right.

    So its better to just say over here is a different language. And that means that if a message goes from a microservice within the cluster to another one, thats going to be really easy. But if a message goes from one of these clusters to another, we might need to put it through a compo-nent that can translate it. So this is one way in which I think that once we look at multiple micro-services, we need to think do clusters of them belong in differ-ent bounded contexts?

    Theres also the issue of the inside and the outside. The out-side of these -- micro services are what Im really talking about now -- the outside of a microservice might speak a different language than the inside. You know you might say pass an order to this microservice or pass this stream of orders to this microservice. Inside its going to crunch away with a model that views orders in a statistical fashion, lets say, and comes up with some recommen-dations or something.

    Well, the interior then is us-ing a quite different model. The interior of that microservice is a different bounded context. But as I said, that whole cluster is speaking the same language to each other. So we have this inter-change context where we define what messages mean as they move between contexts and then we have the interior context of each service.

    It makes a lot of sense. So what Im wondering is if a bound-ed context is usually a cluster of microservices, is there any way that you can think of to

    tell whether certain function-ality should be implemented in a microservice on its own or just be part of another mi-croservice? Because obvious-ly if there is a cluster that is a bounded context, its not one bounded context is one micro-service, its a set of microser-vices. So Im wondering wheth-er there is a rule that would give us an idea of whether we should break this functionality apart into an additional micro-service and a bounded context doesnt seem to cut it.

    So first of all, yeah, if a cluster of microservices is a context, or rather the exterior, the message passing between them would be a microservice, and then inside of each is a -- sorry is a bound-ed context -- and then the in-side of each is another bounded context. But now youre saying, well, suppose that we have a new functionality we need to put somewhere, should I put it as another microservice in this cluster?

    Well, I think that this is ba-sically though the same kind of question we always have to answer when were designing things. Does this piece of func-tionality fit within some existing structure? Or is it going to start distorting that structure out of a good shape? And if so, then where else should I put it? I think the factors that go into that, Im not being too original here, is how coupled is it to the other functions within the cluster? Like if theres a necessarily a chatty relationship with three different components of that cluster, then it seems very likely were going to want to keep it in that cluster.

    Another factor though would be the expressiveness of that cluster. The expressiveness of that particular bounded con-texts language - does it express

    this concept well? Can I extend that language to express this concept well? And if so, then it might be a good fit. If not, then how much price am I going to pay in proliferation of different bounded contexts? You know theres a tradeoff, of course.

    So theres no real answer there. Its like heres where we have to do good work as design-ers.

And that's probably the hard part about it.

A little trial and error helps too. That's another reason not to be too perfectionist. You won't get it right anyway, so save time for iteration. Go back and do it again.

Yeah, and do it better the next time. Okay. So you already mentioned the term CQRS. Can you explain what that is?

I remember trying to understand that for a couple of years. So I will say that event sourcing and CQRS came along at almost the same time, and the community of people that were working on them was very interwoven, and the two things were not so clearly delineated; certainly not at the time. But I do think there are two distinct architectural concepts there. They often work well together, but it's useful to think of them separately.

The one that immediately made sense to me, that just spoke to me instantly, was event sourcing. And then CQRS was a little bit less obvious to me. But I do think it's a good technique. So in essence, CQRS says that you break your system into components that either are read-only things or are things that process updates.

So let's take the order taking; you know, that ordering example. When a new order comes in, in CQRS we'd put that in the form of a command -- a command meaning the C in CQRS. And the command would be enter this order or something like that. Now, it goes to some microservice, let's imagine, whose job is to take this kind of command and see if it can be done; like it might say, "Oh, I'm not going to take this order because we no longer carry that item."

So to commands, as they're defined in CQRS, you can say no. That would be the response to that. Or let's say, okay, we do go ahead and we process the order and we reduce the inventory and we initiate a shipment and we send a message about that. Some events come out: an event that says the inventory has been reduced, and another event that says there's a new order that has to be shipped. This is the responsibility of that command-processing part.

Now the query, that's the Q. Sometimes a user might want to look at the catalog and decide what he wants to order. The user might want to see the status of his order -- "Has my order been shipped yet?" -- things like that. So this is the Q part: I want to see the status of my order. And the idea is that this part of the system would be kept very simple. There would be a part of the system where you'd have to figure out how to ship an order. But once it had been shipped, you'd update the status in a query part that would say this order has been shipped.

So queries that way can scale differently than the command processing. In a system where you have to do a lot -- if this were an e-commerce system, we might be handling thousands of orders a minute but even more queries -- we can scale them independently. We can recognize that queries take less processing power, perhaps, and that since there's no change happening, we don't have to worry about consistency rules.

So the query part is very simple and fast and scales that way. The command part is where we have to deal with all the issues of, well, what if a command came in to cancel the order and we've already shipped it, what are the rules around that? Does the command still get processed? I mean, it will get processed, but does the order still get cancelled? On and on. All that rule stuff goes into that, figuring out how to respond to a command.
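
As a minimal sketch of that split (all names here -- OrderCommandHandler, OrderStatusView, enterOrder -- are hypothetical, chosen only to illustrate the pattern): the command side holds the rules and can say no; the query side is a simple read-only view that can be scaled separately.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical CQRS sketch: commands go through a handler that enforces the
// business rules; queries are served from a simple, denormalized read model.
public class OrderCqrsSketch {

    // --- Command side: all the business rules live here, and it can say no. ---
    static class OrderCommandHandler {
        private final Map<String, Integer> inventory = new ConcurrentHashMap<>();
        private final OrderStatusView view;

        OrderCommandHandler(OrderStatusView view) { this.view = view; }

        void restock(String item, int qty) { inventory.merge(item, qty, Integer::sum); }

        // "Enter this order" -- rejected if we no longer carry enough of the item.
        boolean enterOrder(String orderId, String item, int qty) {
            Integer stock = inventory.get(item);
            if (stock == null || stock < qty) return false;  // the command says no
            inventory.put(item, stock - qty);                 // reduce the inventory
            view.recordStatus(orderId, "ACCEPTED");           // keep the read model current
            return true;
        }
    }

    // --- Query side: no rules, no client writes; simple, fast, scaled separately. ---
    static class OrderStatusView {
        private final Map<String, String> statusByOrder = new ConcurrentHashMap<>();

        void recordStatus(String orderId, String status) { statusByOrder.put(orderId, status); }

        Optional<String> statusOf(String orderId) {
            return Optional.ofNullable(statusByOrder.get(orderId));  // "Has my order shipped yet?"
        }
    }
}
```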

So we should probably explain that CQRS is Command Query Responsibility Segregation, if I remember correctly?

    Yes.

    You already said that there is a relation to event sourcing. It seemed to me that the C part, the commands, are the events in event sourcing. Is that what the relationship is like?

Well, I think you could have an event-sourced system that was not CQRS. So for example, you could just have a module that responds to queries and also can process commands, and if you had that, you wouldn't really be practicing CQRS because you wouldn't be separating them. But another thing is that in event sourcing, let's say that we have an order object. The old traditional OO way of doing this is that that order object might have a field that says it's been shipped. In event sourcing we say, well, we don't have a field or anything like that. What we have is a series of events, and when the order shipped, an event was created saying it has shipped.

So when we want to know the status of that order, we just go and find the events relevant to it and then roll them up. The classic example might be -- well, I'll use the example I first heard Greg Young use to explain the point.

So let's say that you are doing some kind of stock-trading application. Someone says sell 10,000 shares of IBM above a certain price. So this order goes out. It's 10,000. And now the question comes, well, how many shares are still to be sold? So each time we execute this order, let's say we sell 1,000, and then in a separate transaction we sell 2,000 more. So here we have two events -- really three events. One was sell 10,000, and then there were two events that said: we sold 1,000, and then another event that said we sold 2,000. Now the question is how much remains to be sold? How much IBM should we sell at that price? At the time of the query, we can find what events are visible to us and we can calculate it.

So in the old days, we'd have had that object and it would have had 10,000, and then the first sale comes in and we'd subtract 1,000. So now it would say sell 9,000, and then another 2,000 come in and we'd say sell 7,000. An event-sourcing system doesn't even have that field. Or if you do, it's an immutable field that expresses the original order to sell 10,000, and then you've got a completely separate object, an event object, that says we sold 1,000 and another one that says we sold 2,000. If you want to know, you can figure it out. You look at all those events and you say, 10,000 minus 1,000 minus 2,000.
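
A minimal sketch of that rollup, with hypothetical event names: nothing stores the remaining quantity; it is recomputed from the immutable events whenever someone asks.

```java
import java.util.List;

// Hypothetical event-sourcing sketch of the IBM example: the answer to
// "how much remains to be sold?" is derived from the events, not stored.
public class SellOrderEvents {

    sealed interface Event permits OrderPlaced, SharesSold {}
    record OrderPlaced(int quantity) implements Event {}  // "sell 10,000 shares"
    record SharesSold(int quantity) implements Event {}   // "we sold 1,000", "we sold 2,000"

    // Roll up the event history to compute the current state.
    static int remainingToSell(List<Event> events) {
        int remaining = 0;
        for (Event e : events) {
            if (e instanceof OrderPlaced p) remaining += p.quantity();
            else if (e instanceof SharesSold s) remaining -= s.quantity();
        }
        return remaining;
    }

    public static void main(String[] args) {
        List<Event> history = List.of(
                new OrderPlaced(10_000), new SharesSold(1_000), new SharesSold(2_000));
        System.out.println(remainingToSell(history));  // 10,000 - 1,000 - 2,000 = 7000
    }
}
```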

And that is the concept of event sourcing, basically. So what I'm wondering is, what is the relationship of CQRS and event sourcing to DDD then?

Well, event sourcing I think is easier to illustrate because it's a modeling technique. It's talking about how you would represent the domain. Compared with the earlier emphasis on entities and values, this is placing the emphasis in a little bit different place. It's saying certain things happen in the domain; in the domain we have got an order execution, and that should be explicitly modeled.

So it's really making the model more explicit about the changes. In the old-style OO system, things change and the objects represent the way things are in our most current view -- and this is also the typical relational database approach -- but they don't show you what happened. They just show you how things are now. Whereas event sourcing shifts and says let's model the state change rather than the state. And we can derive the state from the state change.

So now we say we executed an order; that's the thing we store. We executed an order again; that's the thing we store. If you want to have that other view, it's just a rollup of this. So it's really a modeling technique, and it emerged from trying to apply DDD to certain kinds of problems that were very event centric and also had to handle very high volume. With this, you can scale up the updates, because if your updates are very frequent and your reads are less frequent, for example, you can be inserting events into the system without having to update an object in place every time.

The objects all become immutable, which has certain technical benefits, especially if you're trying to scale things; parallelize things. So I think it fit into DDD so naturally because -- it's really a revamping of the building blocks, is one way to look at it, but it's a little more radical than that.

One thing that I'm really wondering about is, if I look at DDD, and in particular at the model part, it really seems to be an object-oriented approach, I guess, because there are those value objects, entities, and all these kinds of things. It's rather easy to think about how that would be implemented using object-oriented techniques. In the last few years there has been a shift to functional programming. So do you think that DDD can be applied to functional programming too, even though it was originally expressed in rather object-oriented terms?

Yes, that is one of the big things that's happened over these 11 years. The reason that everything's expressed in terms of objects is because objects were king in 2003, 2004; and what else would I have described it as? People who wanted to address complex domains wanted to try to develop a model of that domain to help them deal with the complexity; they used objects. And the building blocks were an attempt to describe certain things that help those kinds of models to actually succeed.

Now, if you are going at it from a functional point of view, then your model is going to look quite different, or rather your implementation is going to look quite different. I think that event sourcing actually points a good way, because, you know, I mentioned that if you've applied full-on event sourcing, the objects are immutable, which is a start toward the functional perspective; because instead of having objects we change in place, we have some kind of data structure that we use a function to derive another data structure from.

So imagine then an event-sourced system where -- and let me just throw microservices in -- you have a microservice, you pass some events to it, and it computes the results and passes out another stream of events that say, well, as a consequence of this, this is what happens. So I pass in we executed a trade for 2,000 and we executed another trade for 1,000, and it passes out an event that says the order has been reduced to 7,000, whatever.

So it's pretty easy to imagine implementing that as a function, actually, perhaps more natural than OO in fact. You've got a stream of events and you want to use it to compute another stream of events; that really cries out for a function to me.
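
Continuing the same hypothetical trading example, a sketch of that functional shape: the service is just a pure function from incoming events to outgoing events, with no state mutated in place.

```java
import java.util.List;
import java.util.stream.Stream;

// Hypothetical sketch of a microservice as a pure function over event streams.
public class TradeEventFunction {

    record TradeExecuted(String orderId, int shares) {}
    record OrderReduced(String orderId, int sharesRemaining) {}

    // Pure function: the output events are derived entirely from the inputs.
    static Stream<OrderReduced> process(String orderId, int originalQuantity,
                                        List<TradeExecuted> trades) {
        int sold = trades.stream().mapToInt(TradeExecuted::shares).sum();
        return Stream.of(new OrderReduced(orderId, originalQuantity - sold));
    }

    public static void main(String[] args) {
        List<TradeExecuted> trades = List.of(
                new TradeExecuted("ord-1", 2_000), new TradeExecuted("ord-1", 1_000));
        // Prints OrderReduced[orderId=ord-1, sharesRemaining=7000]
        process("ord-1", 10_000, trades).forEach(System.out::println);
    }
}
```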

    Yeah, absolutely. It sounds somewhat like an Actor model even.

Yes. Well, some people in the DDD community have really been into the Actor model. Vaughn Vernon, for example, talked a lot about using the Actor model. Indeed, it does seem to be a good fit. It seems like it corresponds closely to another one of the building blocks which we haven't talked about yet. The original book talked about a building block called aggregate, which was sort of trying to describe a set of objects which would have rules about their internal data consistency and somehow would be allowed to enforce those rules.

So people have said, well, you take that unit -- that unit of whatever I'm trying to keep consistent at any given change -- and you give that responsibility to a single actor. Now you imagine an actor receiving events, or commands, and it has to figure out whether it can move from one state to another in a way that respects the invariants of that particular aggregate. And so that's an application of the Actor model that pulls in a little bit of the old aggregates plus events and commands. A lot has been going on when we start talking about it. The way people really build these things is so different.

We should probably say a few words about Actors. Well, an Actor is something that gets events from the outside and executes them sequentially. It is a model for parallel computation where you have multiple Actors exchanging events and each of them works sequentially. But the system as a whole is parallel because all these Actors work in parallel on their own event streams. That's basically the idea, and that seems to be a good fit to the aggregates, as you just said; the DDD aggregates.

Right. And your description is a very nice summary of the technical properties of these things. And let me try to describe why this is so useful: we have these conceptual models of the domain, and we're trying to make a software system that respects these concepts and expresses them.

So there's a lot of different state within a big system, and one of the things that will keep you from going parallel, like you do with Actors, is having no boundary where you can say that the result of this computation does not immediately affect anything else; that we can handle that asynchronously. And that's exactly what aggregates do. They define a subset of the state which has rules about how you can change that state. And you say any kind of resulting changes elsewhere will be handled asynchronously.

That's what an aggregate does. And it's related to the domain because you have to look at the business to know what really can be changed independently; where will there be consequences to getting things out of sync?
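
A minimal sketch of the one-aggregate-per-Actor idea, using a single-threaded executor as a stand-in for an Actor's mailbox (the names and the invariant are hypothetical): commands are applied one at a time, so the aggregate's invariant can be checked without locks, and any downstream effects would be published asynchronously.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: one Actor owns one aggregate. The single-threaded
// mailbox processes commands sequentially, so the invariant "never sell more
// than remains" holds without any locking.
public class SellOrderActor {

    private int remaining;  // aggregate state, touched only by the mailbox thread
    private final ExecutorService mailbox = Executors.newSingleThreadExecutor();

    public SellOrderActor(int quantity) { this.remaining = quantity; }

    // Commands are queued; the single thread applies them in arrival order.
    public void sell(int shares) {
        mailbox.submit(() -> {
            if (shares <= remaining) {
                remaining -= shares;
                // ...publish a SharesSold event; other contexts react asynchronously...
            }
            // else: the aggregate rejects the command -- the invariant would break
        });
    }

    public void shutdown() { mailbox.shutdown(); }
}
```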

Yes. It seems like a good fit between a certain technology, or technical approach, and a certain domain approach.

Yeah, because when we first were doing the aggregate thing, well before I wrote my book, back in the late '90s at least, it was difficult to implement the aggregates; there wasn't really a technical artefact to hang your hat on. So the nice thing about Actor is that it gives you something to say: we have decided that we're going to make each aggregate the responsibility of one Actor. Now I can really tell another programmer, okay, this is my aggregate because I made an Actor for it.

It really helps if you can have an explicit thing. This is why, by the way, I think objects are still a valuable concept. It says that here's a software artefact that makes explicit something in our conceptual model; that there's this thing, an important thing in the domain that we have defined. There's a certain state around it. There are some rules and behaviours around it.

Personally, I've taken a holiday from OO for a few years to freshen up my thinking, but we don't want to throw out the baby with the bathwater. Making things explicit is good.

Another technology that has been on the rise in the last few years is NoSQL. Is there any relation between NoSQL and DDD too, or are they not related?

NoSQL, of course, is different. Unlike event sourcing and CQRS, where the people who came up with those concepts really were DDDers trying to come up with a better way to do DDD, that's not true at all with NoSQL; it came from a totally different world. They were very interested in the technical properties of what they were doing and so on, and above all I think the motivator of a lot of these things was speed. However, I actually think that it's a great boon for DDD.

But one of the biggest handicaps that we've had for a long time is this obsession with everything being stored in a relational database. Data in a relational database has to be structured a certain way. In the days when objects were dominant, the relational database was also still the dominant database. So we used OR mappers, object-relational mappers. Of course, people still use these; I say it as if it's in the past. And then people would talk about the impedance mismatch. Well, what's the impedance mismatch? It just says that the fundamental conceptual structure of an object is different from that of the relational table or set of tables. The way they relate to each other is different.

The trouble here -- I think it was Erik Meijer whom I heard make this point. He said when we say NoSQL, we should make "No" an acronym for "not only". So we should say Not-Only-SQL. And his point was that the problem isn't the relational database, which is a brilliant thing; they're a wonderful, powerful tool. But when you use them for everything, then you encounter data that doesn't fit into them, that isn't the right shape, and you have to twist it into that shape. When you're dealing with a problem that fits them, there's just nothing else -- they're a fantastic tool -- but we've used them for everything. It's hard to find a tool that works well for everything. I think that's another problem with objects, because they were used for everything, and of course they're not good at everything.

This relates to DDD because we're trying to take all these different concepts from the domain, and we are trying to create concepts that have a tangible form in the software. That shape -- sometimes there's a natural shape to it that is more often object-like than relational. If it's object-like, maybe you do want to represent it as objects, but then you have to cram it into a relational table with relations.

So instead of that, maybe you use a key-value store, which is a very natural fit to objects, actually. Object structures really are just references to references of references. It's got that same kind of tree structure -- graph structure, anyway, though good ones have more of a tree structure. So it's a better fit to some kinds of problems.
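
As a small illustration of that fit (hypothetical names, with an in-memory map standing in for a real key-value store): the whole object tree is saved and loaded under a single key, with no decomposition into tables and no joins.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: an aggregate stored whole in a key-value store rather
// than decomposed into relational tables. Real stores would persist a
// serialized form (JSON, for instance) of the same tree-shaped structure.
public class KeyValueOrderStore {

    record OrderLine(String sku, int quantity) {}
    record Order(String orderId, List<OrderLine> lines) {}  // a small object tree

    private final Map<String, Order> store = new ConcurrentHashMap<>();

    public void save(Order order)     { store.put(order.orderId(), order); }  // one key, whole tree
    public Order load(String orderId) { return store.get(orderId); }          // no joins needed
}
```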

And then the nice thing about NoSQL is that it's a relatively diverse world. There are the graph databases, since I did mention graphs, and there are things that are really nicely modeled as graphs. If you say, "How am I going to model this thing?", sometimes people think modeling means OO modeling: "Oh, I have to draw a UML diagram of it and then implement it in C# or Java." That's not what modeling means. Modeling means to create abstractions that represent important aspects of your problem and then put those to work.

So sometimes the natural abstraction is a graph. You want to say, well, how do these people relate to each other? You know, the graph databases, Neo4j and things like that, allow us to choose a tool that actually fits the kind of problem we're trying to solve. I don't now have to twist it into objects and then figure out how to do graph logic over objects while, by the way, I'm also stuffing the object data into a relational database. Instead, I use a graph database and ask graph questions using a graph query language. This is the world of NoSQL to me: that we can choose a tool that fits well with the problem we're trying to solve.

I think the point that you're making is quite important. Obviously, what you're talking about is how those NoSQL databases give you an advantage concerning modeling data, while a lot of people still think that NoSQL is all about scaling and big-data issues. That is one of the benefits, but it's probably not even the most important one. It's more about this flexibility, as you said, and the more natural modeling and different alternatives to relational databases. So I think that's a very good point.

Yeah, and you know, I agree with you, and I think the main reason that people think of it as primarily a scaling technique is because that's where it came from; that was the real driver behind it. It probably took the absolute necessity of those big-data people -- you know, we were so deeply rooted to the relational database that it would take something like that to shake us loose. But I do think that the great opportunities for NoSQL are in the world of complex problems where the things we want to do just don't really fit the relational model. Sometimes they don't fit the OO model. We can choose the thing that fits.

That's actually a very good way to sum it up. So thanks a lot for taking the time. Thanks a lot for all the interesting answers and the interesting insights. I enjoyed it a lot.

    Oh, thank you.


    Service Architectures at Scale: Lessons from Google and eBay

    Watch on InfoQ

Evolution of service architectures

Service architectures of large-scale systems, over time, seem to evolve into systems with similar characteristics.

In 1995, eBay was a monolithic Perl application. After five rewrites, it is a set of microservices written in a polyglot of programming languages. Twitter, on its third generation of architecture, went from a monolithic Rails application to a set of polyglot microservices. Amazon.com started out as a monolithic C++ application and moved to services written in Java and Scala. Today, it is a set of polyglot microservices. In the case of Google and eBay, there are hundreds to thousands of independent services working together.

Unlike the old way of building services with strict tiers, these services exist in an ecosystem with layers of dependencies. These dependencies resemble a graph of relationships rather than a hierarchy. These relationships evolved without centralized, top-down design: evolution rather than intelligent design. Developers create services or extract them from other services or products. Sometimes they group these extracted services in a common service. Services that are no longer needed are deprecated or removed. Services must justify their existence.

The diagram below is an example of a set of services at Google.

Randy Shoup has experience with service architecture at scale at Google and eBay. In his talk, Service Architectures at Scale: Lessons from Google and eBay, he presents the major lessons learned from his experiences at those companies.


These services evolved into a hierarchy of clean layers. The hierarchy was an emergent property; it was not designed that way.

Cloud Datastore is a NoSQL service in Google's publicly available App Engine. Megastore gives multiple-row transactions and synchronous replication among nearby data centers. Bigtable is a data-center-level structured storage of key-value pairs. Everything is built on Google's distributed file system, Colossus. At the lowest level is Borg, the cluster-management infrastructure responsible for assigning resources to processes and containers that need it.

Each layer adds something not in the layer below, but is general enough to be used in the layer above. Everything at Google runs on Colossus, and almost everything uses Bigtable. Megastore has many different use cases. It was originally written for Google apps such as Gmail, and the Cloud Datastore team ended up building on it.

This was never a top-down design. It grew from the bottom up. Colossus was built first. Several years later, Bigtable was built. Several years later, Megastore came into being. Several years after that, Cloud Datastore migrated to Megastore.

Architecture without the architect

Nobody at Google has the title of architect. There is no central approval for technology decisions. Mostly, individual teams make technology decisions for their own purposes.

In the early days of eBay, central approval from its Architectural Review Board was required for all large-scale projects. Despite the great number of talented people on that board, they usually got involved when it was far too late to make changes. It ended up being a bottleneck. The board's only influence was the ability to say no at the last minute.

It would have been much better to have these smart, experienced people work on something individual teams could really use on their own -- a library, a tool, a service, or even a set of guidelines -- rather than having the teams learn at the last minute that a particular replication style (for example) was not going to work.

Standardization without central control

Standardizing the communication between IT services and the infrastructure components is very important.

At Google, there is a proprietary network protocol called Stubby; eBay usually uses RESTful HTTP. For serialization formats, Google uses protocol buffers; eBay tends to use JSON. For a structured way of expressing the interface, Google uses protocol buffers; eBay usually uses a JSON schema.

Standardization occurs naturally because it is painful for a particular service to support many different network protocols with many different formats.

Common pieces of infrastructure are standardized without central control. Source-code control, configuration-management mechanisms, cluster management, monitoring systems, alerting systems, and diagnostic debugging tools all evolve out of conventions.

Standards become standards not by fiat, but by being better than the alternatives. Standards are encouraged rather than enforced by having teams provide a library that implements, for example, the network protocol. Service dependencies on particular protocols or formats also encourage standardization.

Code reviews also provide a means for standardization. At Google, every piece of code checked into the common source-control system is reviewed by at least one peer programmer. Searching through the codebase also encourages standardization: you discover whether somebody else has already done what you need. It becomes easy to do the right thing and harder to do the wrong thing.

Nonetheless, there is no standardization at Google around the internals of a service. There are conventions and common libraries, but no standardization. The four commonly used programming languages are C++, Java, Python, and Go. There is no standardization around frameworks or persistence mechanisms.

Proven capabilities that are reusable are spun out as new services, with a new team. The Google File System was written to support search, and as a distributed, reliable file system, others used it too. Bigtable was first used by search, then more broadly. Megastore was originally built for Google application storage. The Google App Engine came from a small group of engineers who saw the need to provide a mechanism for building new webpages. Gmail came out of an internal side project. App Engine and Gmail were later made available to the public.

When a service is no longer used or is a failure, its team members are redeployed to other teams, not fired. Google Wave was a failure, but the operational transformation technology that allowed real-time propagation of typing events across the network ended up in Google Apps. The idea of multiple people being able to concurrently edit a document in Google Docs came straight out of Google Wave.


More common than a service being a failure is a new generation or version of a service that leads to deprecating the older versions.

Building a service as a service owner

A well-performing service in a large-scale ecosystem has a single purpose, a simple and well-defined interface, and is very modular and independent. Nowadays, people call these microservices. While the word is relatively new, the concept is relatively old. What has happened is that the industry has learned from its past mistakes.

A service owner has a small team, typically three to five people. The team's goals are to provide client functionality, quality software, stable performance, and reliability. Over time, these metrics should improve. Given a limited set of people and resources, it makes sense to use common, proven tools and infrastructure, to build on top of other services, and to automate the building, deploying, operating, and monitoring of the service.

Using the DevOps philosophy, the same team owns the service from creation to deprecation, from design to deployment to maintenance and operation. Teams have freedom to choose their technologies, methodologies, and working environment. They also have accountability for the results.

As a service owner, you are focused on your service, not the hundreds to thousands of services in the broader infrastructure. You do not have to worry about the complete ecosystem. There is a bounded cognitive load. You only need, as they say at Amazon, a team large enough to be fed by two large pizzas. This both bounds the complexity and makes for high-bandwidth communication. Conway's law plays out to your advantage.

The relationship between service teams is very structured. Although everyone is working for the same company, you want to think about other teams as vendors or customers. You want to be cooperative but very clear about ownership and who is responsible for what. Defining and maintaining a good interface is a large part of it. The other critical part is that the customer or client team can choose whether or not to use the service. No top-level directive exists, for example, to store data in Bigtable.

Teams end up defining a service-level agreement that their clients can rely on to meet their own objectives. Otherwise, the client can build whatever functionality they need, and that new functionality could become the next generation of the service.

To make sure that costs are properly allocated, customer teams pay for the use of a service, which aligns economic incentives. Things given for free are not used optimally. In one case, a service using App Engine dropped from 100% to 10% of its resource consumption overnight when it had to pay for that use. Begging and pleading with that client to reduce consumption had not worked because that team had other priorities. In the end, they got better response times with the reduced resource use.

On the other hand, since the service team is charging for use, they are driven to keep service quality high by using practices such as agile development and test-driven development. Charging for use also provides incentives for making small, easy-to-understand changes. All submitted code is peer reviewed. A thousand-line change is not ten times riskier than a hundred-line change; it is more like a hundred times riskier. Every submission to source-code control causes the automated tests and acceptance tests to run on all the dependent code. In aggregate, Google ends up running millions of automated tests every day, all in parallel.

Stability of the interface is important. The key mantra is never break your clients' code, so you often have to keep multiple versions of the interface, and possibly multiple deployments. Fortunately, most changes do not affect the interface. On the other hand, you do want an explicit deprecation policy so that you can move your clients to the latest version and retire the older one.

Predictable performance is important. Service clients want minimal performance variation. Suppose a service has a median latency of one millisecond, but at the 99.99th percentile the latency is one second; it is a thousand times slower about 0.01% of the time. If a request fans out to 5,000 machines, as it might in a Google-scale operation, roughly half of all requests will be slow. Predictable performance is more important than average performance. Low latency with inconsistent performance is not low latency at all. It is also easier for clients to program against a service with consistent performance. Latency at the 99.99th percentile becomes much more important as services use lots of other services and lots of different instances.
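
A back-of-the-envelope check of that fan-out arithmetic, under the simplifying assumption that each machine independently exceeds its 99.99th-percentile latency and that a request is slow if any machine it touches is slow:

```java
// Rough tail-at-scale arithmetic: with a 1-in-10,000 chance of a one-second
// response per machine, a request that fans out to 5,000 machines hits that
// tail with probability 1 - (1 - 1/10,000)^5,000.
public class TailAtScale {
    public static void main(String[] args) {
        double pSlowPerMachine = 1.0 / 10_000;  // the 99.99th percentile
        int fanOut = 5_000;                     // machines touched per request
        double pRequestSlow = 1 - Math.pow(1 - pSlowPerMachine, fanOut);
        // Prints about 39% -- which the talk rounds to roughly half of requests.
        System.out.printf("P(request hits the tail) = %.1f%%%n", pRequestSlow * 100);
    }
}
```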

Low-level details can be important even in large systems. The Google App Engine team found that periodic reallocation of C++ STL containers resulted in latency spikes at a really low but periodic rate that was visible to clients.

Large-scale systems are exposed to failures. While many failures can happen in software and hardware, the interesting ones are the sharks and backhoes. Google has suffered network disruptions because sharks apparently like the taste of trans-Atlantic cables and bite through them. Lots of fiber goes through lightly populated areas of the central United States.

