
Page 1: Jon Alperin 2013

For the Evans Data Corp 2013 Developer Relations Conference
Jon Alperin, Managing Director, Avaya DevConnect Program
[email protected]

The Compliance Conundrum: Decision points for Ecosystem Interoperability Programs

You’ve enabled your developers and created a value-added ecosystem of 3rd party and homegrown solutions. Whether you are a mobile platform, a cloud-based solution, or a typical enterprise or consumer application, extracting value from this ecosystem often requires a tighter go-to-market plan, and that, in turn, means ensuring that these third party solutions actually work correctly with your products and services. In this session, we’ll explore key decision points involved in scoping, building and executing interoperability or compliance testing programs for your ecosystem. You’ll learn where there are tradeoffs to be made in various approaches, and see how you can effectively bridge from pure developer programs to value-added partner programs with direct revenue impacts.

© 2009 Avaya Inc. All rights reserved.

Page 2

The Avaya DevConnect Program is one of the leading B2B developer programs in the communications industry, supporting customers, Avaya channel partners, technology partners and competitors alike in the development of value-added and interoperable solutions that leverage Avaya products and technologies.

The DevConnect Program takes a three-phase approach to growing our ecosystem, spanning developer enablement, business development and go-to-market (GTM) programs. Within this third phase, DevConnect Compliance Testing programs are used to validate interoperability for the benefit of our customers, channels and sales teams, and Avaya support teams, as well as those technology companies, both friendly and competitive, that participate in the program.


Page 3

Interoperability programs sit in the overlap between pure developer programs and partner programs. Each type of program generally has its own set of stakeholders, and it is in the intersection between them that you can see how the influence of formalizing an interoperability or compliance program benefits both audiences.


Page 4

Interoperability programs can be found under many names. And in some cases, the terminology used to describe a program can have a specific and direct bearing on the scope of a program, or even a legal implication to consumers, partners and your own company.

When considering launching a formal program, consider which term really describes the business goals best.


Page 5

For the remainder of this session, I’ll be focusing on the “Big W” questions – the why’s, who’s, when’s, where’s and hoW’s (yeah, I know… but it still has a ‘W’ in the word).

In considering and designing a program, there are a number of decision points to be made along the way.


Page 6

First and foremost, of course, you need to clearly understand why you are building a program at all.

Is it primarily for your own benefit, to protect your reputation with your customers and channels? Is it to protect your revenue stream, to ensure that 3rd party solutions which may plug in to your platform or cloud-based solution do not negatively impact your cash flow?

Would you use it to open new routes to market, filling gaps in your own portfolio? Or does your product or service offering actually require 3rd party components to be the driver for sales, in which case you are beholden to the quality of those offers in order to make your own money?

You may also need to consider competitive factors – do your solutions generally interoperate with other systems and solutions in your customers’ environments? Does that make it a barrier to your own sales if you cannot prove interoperability? Do you need that interoperability to be mutually acknowledged by those other companies?

Or is this simply a reaction to pain your organization currently experiences in terms of customer dissatisfaction or increased support costs, caused by the quality or issues others have introduced, and for which your reputation may also suffer?


Page 7

Once you’ve determined your underlying reasons for a program, you can begin to look at the target audience. Is this a service that is available to any and all of your developers? Or something you make available only to specific partners that are part of your GTM motion?

This decision point will also play into the question of scale vs. depth. Depending upon the size of your community, making testing programs available to anyone in your community can increase costs, or create resource constraints further down the line in terms of how much high-touch vs. low-touch effort you are directly providing towards the oversight and execution of this effort. It also speaks to the overall timeframes you target for testing – trying to serve a large potential audience almost implies needing to automate and drive rapid test cycles, vs. longer, more complex and thorough test efforts.

In addition, depending upon the rationale for having an interop program in the first place, you need to consider if this is just pair-wise testing with a single individual partner, or whether the solutions under test are complex, multi-vendor environments that need to work in a cohesive and comprehensive manner.

And finally, you may also need to consider how competitive solutions fit in the scope of the effort – are you allowing competitors to participate in testing, or are there other checkpoints in place to determine the rationale for participation?


Page 8

Another point to consider is the overall scope of testing, which will drive further decisions regarding the type of test bed and test tools needed, as well as the overall timeframes involved in testing.

For many solutions, basic interface validation will generally suffice. But the deeper and more critical the GTM elements are, the more you need to look at functional validation of a solution, possibly even going as far as testing operational issues, scalability, reliability, performance and more.

Page 9

When designing a test plan, one must consider the scope of the test effort, and how you position to customers, end users and your ecosystem where your responsibilities for ensuring completeness and accuracy end.

Is this just focusing on proper API usage or conformance to a specific industry standard? Will you provide test tools that enforce a minimum set of functional capabilities? Will these tools test boundary conditions and error conditions that are unlikely to occur in normal operations, but which 3rd parties should be able to handle without issue?
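To make that scope question concrete, here is a minimal sketch (all names, payload shapes and the toy adapter are invented for illustration, not any real DevConnect tooling) of a test plan split into normal-path cases and boundary/error cases that a well-behaved 3rd party integration should reject cleanly rather than crash through:

```python
# Hypothetical sketch: a compliance test plan with two tiers of cases.
# "third_party_adapter" stands in for the integration under test; a real
# program would invoke the partner's actual interface.

def third_party_adapter(payload):
    """Toy stand-in: echoes valid input, rejects malformed input with a
    catchable error instead of crashing the host platform."""
    if not isinstance(payload, dict) or "user" not in payload:
        raise ValueError("malformed payload")
    return {"status": "ok", "user": payload["user"]}

NORMAL_CASES = [
    {"user": "alice"},        # typical usage
    {"user": ""},             # empty but well-formed
]

BOUNDARY_CASES = [
    None,                     # missing payload entirely
    {"unexpected": "field"},  # required key absent
]

def run_plan(adapter):
    """Return (passed, failed) counts across both tiers of the plan."""
    passed = failed = 0
    for case in NORMAL_CASES:
        try:
            adapter(case)
            passed += 1
        except Exception:
            failed += 1
    for case in BOUNDARY_CASES:
        # Boundary cases must fail *cleanly*: a raised, catchable error.
        try:
            adapter(case)
            failed += 1   # silently accepting bad input is a failure
        except ValueError:
            passed += 1
        except Exception:
            failed += 1   # an uncontrolled crash is also a failure
    return passed, failed

print(run_plan(third_party_adapter))  # (4, 0) for the toy adapter
```

The useful decision point is visible in the structure itself: the boundary tier tests behavior your customers may never trigger, so including it is a deliberate choice about where your responsibility for completeness ends.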

It’s also important to consider inbound vs. outbound testing. In some cases, it is the 3rd party application driving and invoking functionality offered by your platform/service/product. In other cases, you may be expecting those applications to deliver specific functionality or react in very specific ways to support your own value proposition. Do your test plans reflect those cases?

And, in some cases (particularly for B2C markets), will you be making non-technical judgements as to the suitability of an application for your stamp of approval (the so-called morality judgement)?

You’ll also want to consider the timing of testing – Will you be testing alpha or beta releases from your developers, or only GA-candidate products? Will you be doing testing on beta releases of your own products? And what does this mean downstream for support purposes if these tests are completed on pre-GA products?

Also, consider how public your test plans are. How much does a potential developer need to know prior to committing to test about the specifics of the test plan or the minimum criteria? How much do channels and customers need to know about the specifics of test scope?


Page 10

When it comes to actually executing tests, there’s a lot of flexibility and different ways to do this. But you also need to consider what level of control and insight you may be giving up, especially as it relates to your GTM objectives.

First, you may have the ability to outsource testing to a third party firm like TekVizion or AppLabs. But you may also want to maintain the expertise with these 3rd party applications and solutions in-house. Then again, keeping one or more full time employees sitting around to do testing means having a pretty good handle on what sort of demand you’ll have for testing, and the timeframes and investments in human capital necessary to support those tests.

Then you need to decide where to do the tests. Not everything can (or should) be done in the cloud. In fact, for some solutions, it may be impossible to do testing anywhere but in a controlled lab environment. And while that lab could be yours, it might need to be your partner’s. Which means one or both of you will need to send people and products to spots all around the world.

And never underestimate the legal side, from customs to simply protecting the intellectual property of others (especially if you are testing multiple independent companies who might otherwise compete with one another at the same time and in the same facilities).

When you consider test execution, you also need to give consideration to the extent to which it can be automated – and the tools necessary to do this. Does a one-size-fits-all test plan really work for you and your ecosystem? Or does the actual application under test require a human being to ‘drive’ the test plan, and adjust to real-world differences and fine-tune test efforts in real time?

Would you support a more informal self-test if it were possible to provide a standard tool and test plan? Does that change your ability to provide support? And what sort of proof points would you look for – a simple “say so” by the vendor, or a specific output from your own tools?
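One hedged sketch of that last “proof point” idea (the key handling and names are invented; a real self-test tool would protect or derive its key far more carefully, and this is illustrative rather than tamper-proof): the distributed test tool signs its own results, so a submitted result can be distinguished from a vendor’s bare “say so”:

```python
# Hypothetical sketch: a self-test tool emits results plus an HMAC digest,
# letting the program owner verify the results came from an actual tool run
# and were not hand-edited afterwards.
import hmac
import hashlib
import json

TOOL_KEY = b"example-embedded-key"  # invented; real tools would hide this

def sign_results(results: dict) -> dict:
    """Produce a submission record: raw results plus an HMAC over them."""
    blob = json.dumps(results, sort_keys=True).encode()
    return {"results": results,
            "digest": hmac.new(TOOL_KEY, blob, hashlib.sha256).hexdigest()}

def verify_submission(submission: dict) -> bool:
    """Program owner's check: recompute the HMAC over the claimed results."""
    blob = json.dumps(submission["results"], sort_keys=True).encode()
    expected = hmac.new(TOOL_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, submission["digest"])

record = sign_results({"cases_run": 42, "failures": 0})
print(verify_submission(record))            # True: untouched tool output
record["results"]["failures"] = 3           # vendor edits the results...
print(verify_submission(record))            # False: digest no longer matches
```

The design tradeoff this illustrates: a specific, machine-checkable output raises the bar well above “say so” while still keeping the test itself in the vendor’s hands.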


Page 11

It’s important to realize that just like your own products, testing is unlikely to be exhaustive. There will always be cases and conditions that simply would not be cost justified to put through the program. Moreover, every feature of your product may not be used the same way (or at all) by your ecosystem. So forcing tests that simply do not match the functionality being exploited could lead to a high, erroneous failure rate.

And don’t assume your own products are bulletproof. Testing will undoubtedly uncover a problem in your own product – the question is whether you consider that a failure on the part of the 3rd party.

And never underestimate the politics here – he who controls the checkbook is going to have a big say, especially if the testing costs are high. Even the fact that a solution is under test may be something that a company does not wish to have made public, until they have positive results to report. And even sales opportunities can put pressure on an organization to declare a test “good enough” even though there are known bugs or failed test cases, much like products ship out the door with their own shortcomings noted in release notes (or buried deep in documentation).


Page 12

No matter what, someone has to pay for all this effort. You’re going to need people to manage the program, develop the test plans and tools, and execute or review test results. Depending upon your solution, you may need lab facilities, specialized test beds and other tooling, etc.

If you’re lucky enough to have deep pockets, you may be able to fund it completely out of your pocket, which gives you great control over how things operate, and lowers barriers to participation by others.

On the other hand, you may want to artificially raise those barriers, and set a price point that discourages all but the most serious developers from pursuing a formal program with you. This can help quite a bit with managing the cost, and allowing for prioritization and focus of your resources. These costs can be break-even, or even profit-making efforts if the end result of having your seal of approval on a 3rd party solution helps the vendor drive their own revenue.

Or you can share the costs, perhaps building recovery of costs incurred from testing into a GTM model.

You may also want to consider a tiered approach, with different types of tests and different price points (and benefits).


Page 13

Finally, you need to answer the “What’s in it for me?” for your developers. Even at no cost for the testing, developers are still investing their own time and energy into this process, and they want to understand what they get out of this program.

Is it just a statement of conformity or compliance, posted to your website? A checksheet of test result coverages? Or are they getting some level of additional documentation that offers value to them, their sales channels and their customers?

Does this include marketing elements, from basic logos and marks that they can use, to certificates and plaques that they can display? Does it give them PR and other marketing benefits, or link them to GTM programs like app stores?

And what are the lifecycle expectations? Is the testing recognized as valid forever, or just for the current major release? Or is it even more limited, to a specific minor release? And does it matter if the vendor changes their own products in ways that have no obvious or direct bearing on the point of interoperability?

What about when there is an API change? Or, more interestingly, when the API doesn’t change, but the data exchanged across it does due to new functionality?
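That last case can be pictured with a small sketch (field names invented for illustration): the function signature never changes, so a signature-level compatibility check passes, but a payload-level check still catches the new data flowing across the same API:

```python
# Hypothetical sketch: the API signature send_event(payload) is unchanged,
# but a newer release starts emitting an extra field. Only a payload-level
# schema check notices that the original interop test no longer covers
# what actually crosses the interface.

V1_SCHEMA = {"event", "timestamp"}          # fields validated in the tested release
V2_PAYLOAD = {"event": "call_start",
              "timestamp": 1700000000,
              "codec": "opus"}              # new field added by a later release

def payload_compatible(payload: dict, schema: set) -> bool:
    """True only if the payload carries exactly the fields the original
    interop test validated -- anything extra or missing means retest."""
    return set(payload) == schema

print(payload_compatible({"event": "call_start", "timestamp": 1}, V1_SCHEMA))  # True
print(payload_compatible(V2_PAYLOAD, V1_SCHEMA))                               # False
```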

All of these are considerations that your developers will chew on as they determine whether the investment in your program is as good for them as it is for you.


Page 14

In summary, it’s all about understanding the tradeoffs. Having a very open testing program based on a specific, well-defined set of compliance criteria, intended to drive high volumes with little hands-on expertise and even less long-term responsibility, is a very different program in scope, cost and investment model than one that serves a different GTM need.

Tradeoffs abound, from how you handle competitive products within the scope of your test activities, to the implications of making “moral” decisions to put a stamp of approval on a certain type of application. Even test case failures aren’t necessarily as definitive as one might think, depending upon the end goals.


Page 15

Thank you.
