
API Virtualization News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API virtualization conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is testing its APIs, going beyond just monitoring to understand the details of each request and response.

Testing and Meaningful Mocks in a Microservice System by Laura Medalia (@codergirl__) of Care/Of At @APIStrat In Nashville

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Testing and Meaningful Mocks in a Microservice System” by Laura Medalia (@codergirl__) of Care/Of on September 25th.

Here is Laura’s abstract for the session:

Laura will be talking about tooling for mocking microservice endpoints in a meaningful way using OpenAPI specifications. She will cover how to set up microservice deployment processes so that with each versioned microservice deployed, a mock of the service with up-to-date contracts will also be deployed. Laura will also show how to use tooling she and her team built to consume these lightweight mocks in unit tests, and either get default mock responses or mock out custom responses for different test cases.
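
As a rough sketch of the pattern Laura describes--consuming a lightweight, versioned mock in a unit test--something like the following Python test could run against a mock deployed alongside a service. The mock base URL and endpoint here are hypothetical stand-ins, not details from her talk.

```python
import unittest

import requests

# Hypothetical mock deployed alongside version 1.2.0 of an orders service.
MOCK_BASE = "http://mocks.internal.example.com/orders/1.2.0"


class OrdersClientTest(unittest.TestCase):
    def test_get_order_matches_contract(self):
        # The mock returns a default response generated from the service's
        # OpenAPI contract, so the test never touches the real service.
        resp = requests.get(f"{MOCK_BASE}/orders/123")
        self.assertEqual(resp.status_code, 200)
        order = resp.json()
        # Assert against the contracted shape, not against live data.
        self.assertIn("id", order)
        self.assertIn("status", order)


if __name__ == "__main__":
    unittest.main()
```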

In an era where many API development groups are working to move to a design and mock first approach, this session will be key to our journey, and APIStrat is where we all need to be. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!

Photo Credit: Laura Medalia on Pintaram.


Sports Data API Simulations From SportRadar

I’m a big fan of API sandboxes, labs, and other virtualization environments. API sandboxes should be the default in heavily regulated industries like banking. I also support the virtualization of the schema and data used across API operations, like I am doing at the Department of Veterans Affairs (VA) with synthetic healthcare data. I’m very interested in anything that moves forward the API virtualization conversation, so I found the live sporting API simulations over at SportRadar worth a closer look.

SportRadar’s “live simulations give you the opportunity to test your code against a simulation of live data before the preseason starts or any time! Our simulation system replays select completed games allowing you to view our API feeds as if they were happening live.” Here are the details of their NFL Official API simulations that run every day:

  • 11:00 am – Data is reset for the day’s simulations.
  • 1:00 pm PST – Week 1 games will run: Oakland at Houston, Detroit at Seattle, Miami at Pittsburgh, and New York at Green Bay.
  • 2:00 pm PST – Week 2 games will run: Seattle at Atlanta, Houston at New England, Green Bay at Dallas, and Pittsburgh at Kansas City.
  • 3:00 pm PST – Week 3 games will run: Green Bay at Atlanta and Pittsburgh at New England.
  • 4:00 pm PST – Week 4 games will run: New England vs Atlanta.
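
To give a feel for how a consumer might exercise a simulation like this, here is a minimal polling sketch in Python. The feed URL and response fields are hypothetical placeholders, not SportRadar’s actual endpoints, so check their documentation for the real details.

```python
import time

import requests

# Hypothetical simulated game feed -- consult SportRadar's docs for real URLs.
FEED_URL = "https://api.example.com/nfl/simulation/games/week1/boxscore.json"
API_KEY = "your-api-key"  # placeholder

for _ in range(10):
    # The simulation replays a completed game as if it were live, so
    # polling it looks exactly like polling a real in-progress game.
    resp = requests.get(FEED_URL, params={"api_key": API_KEY})
    resp.raise_for_status()
    boxscore = resp.json()
    print(boxscore.get("status"), boxscore.get("clock"))
    time.sleep(30)  # respect whatever rate limits apply
```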

These simulations provide a pretty compelling evolution of the concept of API virtualization when it comes to event data, or any data that happens on a schedule. I wanted to write up this story to make sure I bookmarked it as part of my API virtualization research. If I don’t write about it, it doesn’t happen. As I’m working to include API sandboxes, lab environments, and other API virtualization approaches in my consulting, I’m keen to add new dimensions like API simulation, which provide great material for workshops, presentations, and talks.

I feel like ALL APIs should have some sort of sandbox to play in while developing--not just those in heavily regulated industries, or those with sensitive production data and content. It can be stressful to have to develop against a live environment, and having realistic sandboxes, labs, and simulations that provide complete copies of production APIs, along with real world synthetic data, can help developers be more successful in getting up and running. I’ll keep profiling interesting approaches like this one out of SportRadar, and adding to my API virtualization research, helping round off my toolbox when it comes to helping API providers develop their sandbox environments.


Synthetic Healthcare Records For Your API Using Synthea

I have been working on several fronts to help with API efforts at the Department of Veterans Affairs (VA) this year, and one of them is helping quantify the deployment of a lab API environment for the platform. The VA doesn’t want it called a sandbox, so they are calling it a lab, but the idea is to provide an environment where developers can work with APIs and see data just like they would in a live environment, without actually having access to live patient data until their applications have been reviewed and meet requirements.

One of the projects being used to help deliver data within this environment is called Synthea. The virtualized data will be made available through the VA labs API–here is the description of what they do from their website:

Synthea is an open-source, synthetic patient generator that models the medical history of synthetic patients. Our mission is to provide high-quality, synthetic, realistic but not real, patient data and associated health records covering every aspect of healthcare. The resulting data is free from cost, privacy, and security restrictions, enabling research with Health IT data that is otherwise legally or practically unavailable.

Synthea data contains a complete medical history, including medications, allergies, medical encounters, and social determinants of health, providing data that developers can use without concern for legal or privacy restrictions, and supporting a variety of data standards, including HL7 FHIR, C-CDA, and CSV. It is perfect for loading up into sandbox and lab API environments, allowing developers to safely play around with building healthcare applications, without actually touching production patient data.
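
As a minimal sketch of how this plays out in practice: Synthea’s generator writes one FHIR bundle per synthetic patient as JSON, which can then be loaded into a sandbox API. The output path and sandbox URL below are illustrative assumptions, not details from the VA project.

```python
import json
import pathlib

import requests

# Assumed Synthea output directory and a hypothetical sandbox FHIR server.
OUTPUT_DIR = pathlib.Path("synthea/output/fhir")
SANDBOX_FHIR_BASE = "https://sandbox.example.com/fhir"

for bundle_file in OUTPUT_DIR.glob("*.json"):
    bundle = json.loads(bundle_file.read_text())
    # A transaction bundle can be POSTed to the base of a FHIR server,
    # loading all of a patient's resources in a single request.
    resp = requests.post(
        SANDBOX_FHIR_BASE,
        json=bundle,
        headers={"Content-Type": "application/fhir+json"},
    )
    resp.raise_for_status()
    print(f"Loaded {bundle_file.name}")
```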

I’ve been looking for solutions like this for other industries. Synthea even has a patient data generator available on Github, which is something I’d love to see for every industry. Sandbox and lab environments should be the default for any API, especially APIs operating within heavily regulated industries. I think Synthea provides a pretty compelling model for the virtualization of API data, and I will be referencing it as part of my work, in hopes of incentivizing someone to fork it and use it to provide something we can put to work as part of any API implementation.


OpenAPI Is The Contract For Your Microservice

I’ve talked about how generating an OpenAPI (fka Swagger) definition from code is still the dominant way that microservice owners are producing this artifact. This is a by-product of developers seeing it as just another JSON artifact in the pipeline, and of it being primarily used to create API documentation, oftentimes using Swagger UI–which is also why it is still called Swagger, and not OpenAPI. I’m continuing my campaign to help the projects I’m consulting on be more successful with their overall microservices strategy, by helping them better understand how everything can work in concert by focusing in on OpenAPI, and realizing that it is the central contract for their service.

Each Service Begins With An OpenAPI Contract

There is no reason that microservices should start with writing code. It is expensive, rigid, and time consuming. The contract that a service provides to clients can be hammered out using OpenAPI, and made available to consumers as a machine readable artifact (JSON or YAML), as well as visualized using documentation like Swagger UI, Redoc, and other open source tooling. This means that teams need to put down their IDEs, and either begin handwriting their OpenAPI definitions, or begin using an open source editor like Swagger Editor, Apicurio, API GUI, or even the Postman development environment. The entire surface area of a service can be defined using OpenAPI, and then provided as a mocked version of the service, with documentation for usage by UI and other application developers. All before code has to be written, making microservices development much more agile, flexible, iterative, and cost effective.
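
To make the contract-first idea concrete, here is a minimal sketch that hand-defines a tiny OpenAPI contract in Python and writes it out as YAML (using the PyYAML library). In practice you would author this in an editor like Swagger Editor or Apicurio, but the artifact is the same; the “places” service here is purely hypothetical.

```python
import yaml  # PyYAML

# A deliberately tiny contract for a hypothetical "places" service.
contract = {
    "openapi": "3.0.0",
    "info": {"title": "Places Service", "version": "1.0.0"},
    "paths": {
        "/places": {
            "get": {
                "summary": "List places",
                "responses": {
                    "200": {
                        "description": "A list of places",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {"$ref": "#/components/schemas/Place"},
                                }
                            }
                        },
                    }
                },
            }
        }
    },
    "components": {
        "schemas": {
            "Place": {
                "type": "object",
                "properties": {
                    "id": {"type": "string"},
                    "name": {"type": "string"},
                },
            }
        }
    },
}

# The machine readable artifact that every other lifecycle stop consumes.
with open("openapi.yaml", "w") as f:
    yaml.safe_dump(contract, f, sort_keys=False)
```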

Mocking Of Each Microservice To Hammer Out The Contract

Each OpenAPI can be used to generate a mock representation of the service using Postman, Stoplight.io, or another OpenAPI-driven mocking solution. There are a number of services and tools available that take an OpenAPI and generate a mock API, as well as the resulting data. Each service should have the ability to be deployed locally as a mock service by any stakeholder, published and shared with other team members as a mock service, and shared as a demonstration of what the service does, or will do. Mock representations of services will minimize builds, the writing of code, and refactoring to accommodate rapid changes during the API development process. Code shouldn’t be generated or crafted until the surface area of an API has been worked out, and reflects the contract that each service will provide.
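
From the consumer side, assuming a mock generated from a contract like the one above is running locally (port 4010 is Stoplight Prism’s default, but any mocking tool works the same way), a stakeholder can poke at the service before a line of backend code exists:

```python
import requests

# Assumes an OpenAPI-driven mock is running locally on Prism's default port.
MOCK_BASE = "http://localhost:4010"

resp = requests.get(f"{MOCK_BASE}/places")
print(resp.status_code)  # the mock answers with a contracted status code
print(resp.json())       # sample data generated from the contract's schema
```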

OpenAPI Documentation Always Available In Repository

Each microservice should be self-contained, and always documented. Swagger UI, Redoc, and other API documentation generated from OpenAPI has changed how we deliver API documentation. OpenAPI generated documentation should be available by default within each service’s repository, linked from the README, and readily available for running using static website solutions like Github Pages, or running locally on localhost. API documentation isn’t just for the microservice owner / steward to use, it is meant for other stakeholders, and potential consumers. API documentation for a service should be always on, always available, and not something that needs to be generated, built, or deployed. API documentation is a default tool that should be present for EVERY microservice, and treated as a first class citizen as part of its evolution.

Bringing An API To Life Using Its OpenAPI Contract

Once an OpenAPI contract has been defined, designed, and iterated upon by the service owner / steward, as well as a handful of potential consumers and clients, it is ready for development. A finished (enough) OpenAPI can be used to generate server side code using a popular language framework, build out as part of an API gateway solution, or drive common proxy services and tooling. In some cases the resulting build will be a finished API ready for use, but most of the time it will take some further connecting, refinement, and polishing before it is a production ready API. Regardless, there is no reason for an API to be developed, generated, or built until the OpenAPI contract is ready, providing the required business value each microservice is being designed to deliver. Writing code when an API will still change is an inefficient use of time in a virtualized API design lifecycle.

OpenAPI-Driven Monitoring, Testing, and Performance

A ready-to-go OpenAPI contract can be used to generate API tests, monitors, and performance tests to ensure that services are meeting their business service level agreements. The details of the OpenAPI contract become the assertions of each test, which can be executed against an API on a regular basis to measure not just the overall availability of an API, but whether or not it is actually meeting the specific, granular business use cases articulated within the OpenAPI contract. Every detail of the OpenAPI becomes the contract for ensuring each microservice is doing what has been promised, and something that can be articulated and shared with humans via documentation, as well as programmatically by other systems, services, and tooling employed to monitor and test according to a wider strategy.
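
A rough sketch of how the contract’s details become test assertions, assuming the openapi.yaml from earlier and a hypothetical deployed base URL:

```python
import requests
import yaml

BASE_URL = "https://api.example.com"  # hypothetical service under test

with open("openapi.yaml") as f:
    spec = yaml.safe_load(f)

# Walk every GET operation in the contract and assert that the service
# answers with a status code the contract actually declares.
for path, operations in spec["paths"].items():
    if "{" in path:
        continue  # templated paths need example values substituted first
    for method, operation in operations.items():
        if method != "get":
            continue  # keep the sketch read-only
        declared = set(operation["responses"].keys())
        resp = requests.get(BASE_URL + path)
        assert str(resp.status_code) in declared, (
            f"GET {path} returned {resp.status_code}, "
            f"but the contract declares {declared}"
        )
        print(f"OK GET {path}")
```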

Empowering Security To Be Directed By The OpenAPI Contract

An OpenAPI provides the entire details of the surface area of an API. In addition to being used to generate tests, monitors, and performance checks, it can be used to inform security scanning, fuzzing, and other vital security practices. There are a growing number of services and tools emerging that allow for building models and policies, and executing security audits, based upon OpenAPI contracts--taking the paths, parameters, definitions, security, and authentication details and using them as actionable input for ensuring security across not just an individual service, but potentially hundreds or thousands of services being developed across many different teams. OpenAPI is quickly becoming not just the technical and business contract, but also the political contract for how you do business on the web in a secure way.
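
As a naive sketch of contract-informed fuzzing: take each query parameter the OpenAPI declares, throw junk values at it, and assert the service fails gracefully with a 4xx rather than an unhandled 5xx. Real scanners go much further, but the principle fits in a few lines; the base URL is again a placeholder.

```python
import requests
import yaml

BASE_URL = "https://api.example.com"  # placeholder
JUNK = ["", "-1", "9" * 500, "<script>alert(1)</script>", "' OR 1=1--"]

with open("openapi.yaml") as f:
    spec = yaml.safe_load(f)

for path, operations in spec["paths"].items():
    for method, operation in operations.items():
        if method != "get":
            continue  # keep the sketch read-only
        for param in operation.get("parameters", []):
            if param.get("in") != "query":
                continue
            for junk in JUNK:
                resp = requests.get(BASE_URL + path, params={param["name"]: junk})
                # Bad input should never surface as an unhandled server error.
                assert resp.status_code < 500, (
                    f"{path}?{param['name']}={junk!r} caused {resp.status_code}"
                )
```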

OpenAPI Provides API Discovery By Default

An OpenAPI describes the entire surface area of the request and response of each API, providing 100% coverage for all the interfaces a service will possess. While this OpenAPI definition will be used to generate mocks, code, documentation, testing, monitoring, security, and serve other stops along the lifecycle, it also provides much needed discovery across groups, and by consumers. Anytime a new application is being developed, teams can search across their Github, Gitlab, Bitbucket, or Team Foundation Server (TFS) repositories, and see what services already exist before they begin planning any new ones. Service catalogs, directories, search engines, and other discovery mechanisms can use the OpenAPIs across services to index them, and make them available to other systems, applications, and most importantly to other humans who are looking for services that will help them solve problems.
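
As a sketch of what that discovery can look like, the Github code search API can be used to find OpenAPI files across an organization’s repositories before planning anything new (the organization name, filename convention, and token are placeholders, and code search requires authentication):

```python
import requests

ORG = "your-org"             # placeholder
TOKEN = "your-github-token"  # placeholder

resp = requests.get(
    "https://api.github.com/search/code",
    params={"q": f"filename:openapi.yaml org:{ORG}"},
    headers={"Authorization": f"token {TOKEN}"},
)
resp.raise_for_status()

# List every service contract found across the organization.
for item in resp.json().get("items", []):
    print(item["repository"]["full_name"], item["path"])
```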

OpenAPI Delivers The Integration Contract For Clients

OpenAPI definitions can be imported into Postman, Stoplight, and other API design, development, and client tooling, allowing for the quick setup of environments, and collaboration on integration across teams. OpenAPIs are also used to generate SDKs, and deploy them using existing continuous integration (CI) pipelines, by companies like APIMATIC. OpenAPIs deliver the client contract we need to learn about an API, get to work developing a new web or mobile application, or manage updates and version changes as part of our existing CI pipelines. OpenAPIs deliver the integration contract needed for all levels of clients, helping teams go from discovery to integration with as little friction as possible. Without this contract in place, on-boarding with one service is time consuming, and doing it across tens or hundreds of services becomes impossible.

OpenAPI Delivers Governance At Scale Across Teams

Delivering consistent APIs within a single team takes discipline. Delivering consistent APIs across many teams takes governance. OpenAPI provides the building blocks to ensure APIs are defined, designed, mocked, deployed, documented, tested, monitored, performant, secured, discovered, and integrated with consistently. The OpenAPI contract is an artifact that governs every stop along the lifecycle, and at scale it becomes the measure for how well each service is delivering, across not just tens, but hundreds or thousands of services, spread across many groups. Without the OpenAPI contract, API governance is non-existent, or at best extremely cumbersome. The OpenAPI contract is not just top down governance telling teams what they should be doing; it is also the bottom up contract that lets the service owners / stewards who are delivering quality services on the ground inform governance, and lead efforts across many teams.

I can’t overstate the importance of the OpenAPI contract to each microservice, as well as to the overall organizational and project microservice strategy. I know that many folks will dismiss the role that OpenAPI plays, but look at the list of members who govern the specification. Consider that Amazon, Google, and Azure ALL have baked OpenAPI into their microservice delivery services and tooling. OpenAPI isn’t a WSDL. An OpenAPI contract is how you will articulate what your microservice will do from inception to deprecation. Make it a priority, and don’t treat it as just an output from your legacy way of producing code. Roll up your sleeves, spend time editing it by hand, and load it into 3rd party services to see the contract for your microservice in different ways, through different lenses. Eventually you will begin to see it is much more than just another JSON artifact laying around in your repository.


API Transit Basics: Mocking

This is a series of stories I’m doing as part of my API Transit work, trying to map out a simple journey that some of my clients can take to rethink some of the basics of their API strategy. I’m using a subway map visual and experience to help map out the journey, which I’m calling API Transit–leveraging the verb form of transit to describe what every API should go through.

One key deficiency I see in the organizations I work with on a regular basis is the absence of the ability to quickly deploy a mock version of an API–meaning the ability to deliver a virtualized instance of the surface area of an API that will accept requests and return responses, without writing or generating any backend code. Mocking an API requires an API definition, and with many groups still producing these definitions from code, the ability to mock an API is lost in the shuffle. This leaves out the ability to play with an API before it ever gets built, which, if you think about it, goes against much of why we design APIs in the first place.

Mocking of an API goes hand in hand with a design first approach. Being able to define, design, mock, and then receive feedback from potential consumers, and repeat until the desired API is delivered, is significantly more efficient than writing code, deploying an API, and iterating on it over a longer time frame. Over the last couple of years, a growing number of services and tools have emerged to help us mock our APIs, as well as the schemas that are used as part of their requests and responses, giving birth to this entirely new stop along the API life cycle.

  • Mockable - A simple service for mocking web and SOAP APIs
  • Sandbox - A simple service for generating sandboxes using a variety of formats.
  • Stoplight Prism - An open source tool for mocking and transforming from OpenAPIs.
  • Wiremock - An open source tool for mocking APIs.
  • Postman Mock Server - You can set up mock servers from within the Postman environment.

Mocking of APIs is something that organizations who have not adopted an API definition approach to delivering APIs cannot ever fully realize. When you have a robust, machine readable definition of the surface area of your API, it allows you to quickly generate sandboxes, mocks, and virtualized instances of an API. These interfaces can then be consumed and played with, allowing API definitions to be adjusted, tweaked, and polished until they meet the needs of consumers–shortening the feedback loop between each iteration and version of an API, and saving both time and money.
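
To show how little machinery a basic mock actually requires once you have a machine readable definition, here is a toy sketch that serves an OpenAPI contract’s example responses using nothing but Python’s standard library and PyYAML. Tooling like Prism or Sandbox does far more, but the core idea is this small.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import yaml  # PyYAML

with open("openapi.yaml") as f:
    spec = yaml.safe_load(f)

# Build a lookup of canned responses from each GET operation's example.
examples = {}
for path, ops in spec["paths"].items():
    get = ops.get("get")
    if not get:
        continue
    media = (
        get.get("responses", {})
        .get("200", {})
        .get("content", {})
        .get("application/json", {})
    )
    examples[path] = media.get("example", {"note": "no example defined"})


class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = examples.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


HTTPServer(("localhost", 4010), MockHandler).serve_forever()
```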

The API developers I’ve seen become proficient in defining and designing their APIs, and delivering mock APIs, have also begun to be more agile when it comes to mocking and virtualizing the data that gets returned as part of mock API responses–further pushing mocking and virtualization into testing, security, and other critical aspects of API operations. Being able to mock API interfaces is a sign that API operations are maturing, allowing costly mistakes to be eliminated, or identified long before anything goes into production, making sure APIs meet the needs of both providers and consumers long before anything gets set into stone.


The OpenAPI-Powered Mock API Server From Stripe

I showcased Stripe’s OpenAPI definition the other week, so I wanted to also highlight a side effect of Stripe deciding to be OpenAPI-Driven. Stripe recently published an OpenAPI-powered mock server, allowing Stripe API consumers to test drive, and play with the Stripe API in a simulated environment. “It operates statelessly (i.e. it won’t remember new resources that are created with it) and responds with sample data that’s generated using a similar scheme to the one found in the API reference.”

The Stripe mock server is written in Go, and is available on Github. You can rebuild it from an updated OpenAPI anytime. It is a pretty dead simple mock server that seems like it should be standard practice for any API, providing a simple, safe, and portable way to play with an API. I’m going to fork the Stripe mock server and play with it, to see what is possible with the tool.
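
Here is a minimal sketch of pointing the official Stripe Python bindings at the mock server, assuming stripe-mock is running locally on its default port (12111 at the time of writing):

```python
import stripe

# Point the client library at the local mock instead of the live API.
stripe.api_base = "http://localhost:12111"
stripe.api_key = "sk_test_123"  # the mock accepts any test key

# Calls behave like the real API, but return generated sample data and
# never touch production -- the mock is stateless by design.
charges = stripe.Charge.list(limit=3)
for charge in charges.data:
    print(charge.id, charge.amount)
```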

I will be keeping an eye out for any other OpenAPI-powered tools out of Stripe, now that they are actively working with it. Adoption of OpenAPI at this level of API provider is helpful to the rest of the community, both by providing an example of how you can bake OpenAPI into your operations, and through the open source tooling these companies produce. It’s an important community effect that makes this whole API thing work so well.

Ideally, the leading API providers, with the most resources, could coordinate their efforts and deliver a suite of open source tooling. However, I’m patient; I’m just happy that big companies like Stripe, Slack, Box, and the New York Times are doing OpenAPI at all. I can wait for all the cool tooling to happen next. I’ll keep an eye on Stripe’s Github organization to see what pops up.


An Example Of How Every API Provider Should Be Using OpenAPI Out Of The Slack Platform

[The Slack team has published the most robust and honest story about using OpenAPI, providing a blueprint that other API providers should be following](https://medium.com/slack-developer-blog/standard-practice-slack-web-openapi-spec-daaad18c7f8). What I like most about the approach by Slack to develop, publish, and share their OpenAPI is the honesty behind why they are doing it: to help standardize around a single definition. [They publish and share the OpenAPI on Github](https://github.com/slackapi/slack-api-specs), which other API providers are doing, and which I think should be standard operating procedure for all API providers, but they also go into the realities regarding the messy history of their API documentation--an honesty that I feel ALL API providers should be embracing.

My favorite part of the story from Slack is the opening paragraph that honestly portrays how they got here: _"The Slack Web API’s catalog of methods and operations now numbers nearly 150 reads, writes, rights, and wrongs. Its earliest documentation, much still preserved on api.slack.com today, often originated as hastily written notes left from one Slack engineer to another, in a kind of institutional shorthand. Still, it was enough to get by for a small team and a growing number of engaged developers."_ Even though we all wish we could do APIs correctly, and support API documentation perfectly from day one, this is never the reality of API operations. OpenAPI will not be a silver bullet for fixing all of this, but it can go a long way in helping standardize what is going on across teams, and within an API community.

Slack focuses on SDK development, Postman client usage, alternative forms of documentation, and mock servers as the primary reasons for publishing the OpenAPI for their API. They also share some of the back story regarding how they crafted the spec, and their decision making process behind why they chose OpenAPI over other specifications. They also share a bit of their road map regarding the API definition, and that they will be adopting OpenAPI v3.0, providing _"more expressive JSON schema response structures and superior authentication descriptors, specifications for incoming webhooks, interactive messages, slash commands, and the Events API tighter specification presentation within api.slack.com documentation, and example spec implementation in Slack’s own SDKs and tools"_.

I've been covering leading API providers' move towards OpenAPI adoption for some time, writing about [the New York Times publishing of their OpenAPI definition to Github](https://apievangelist.com/2017/03/01/new-york-times-manages-their-openapi-using-github/), and [Box doing the same, but providing even more detail behind the how and why of doing OpenAPI](https://apievangelist.com/2017/05/22/box-goes-all-in-on-openapi/). Slack continues this trend, but showcases more of the benefits it brings to the platform, as well as the community. All API providers should be publishing an up-to-date OpenAPI definition to Github by default like Slack does. They should also be standardizing their documentation, mock and virtualized implementations, generating SDKs, and driving continuous integration and testing using this OpenAPI, just like Slack does. They should be this vocal about it too, encouraging the community to embrace, and ingest the OpenAPI across the on-boarding and integration process.
I know some folks are still skeptical about what OpenAPI brings to the table, but increasingly the benefits are outweighing the skepticism--making it hard to ignore OpenAPI. Another thing I want to highlight in this story, is that Taylor Singletary ([@episod](https://twitter.com/episod)), reality technician, documentation & developer relations at Slack, brings an honest voice to this OpenAPI tale, which is something that is often missing from the platforms I cover. This is how you make boring ass stories about mundane technical aspects of API operations like API specifications something that people will want to read. You tell an honest story, that helps folks understand the value being delivered. You make sure that you don't sugar coat things, and you talk about the good, as well as some of the gotchas like Taylor has, and connect with your readers. It isn't rocket science, it is just about caring about what you are doing, and the human beings your platform impacts. When done right you can move things forward in a more meaningful way, beyond what the technology is capable of doing all by itself.


Spreadsheet To Github For Sample Data CI

I need data for use in my human services API implementations. I need sample organizations, locations, and services to round off these implementations, making it easier to understand what is possible with an API when you are playing with one of my demos.

There are a number of features that require data to be present in these systems, and everything is always more convincing when there are intuitive, recognizable entries, not just test names or Latin filler text. I need a variety of samples, in many different categories, with complete phone, address, and other specific data points. I also need this across many different APIs, and ideally on demand, when I set up a new demo instance of the human services API.

To accomplish this I wanted to keep things as simple as possible, so that non-developer stakeholders could get involved. I set up a Google spreadsheet with a tab for each type of test data I needed–in this case, organizations and locations. Then I created a Github repository with a Github Pages front-end. After making the spreadsheet public, I pull each worksheet using JavaScript, and write it to the Github repository as YAML, using the Github API.
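
The actual implementation here is JavaScript, but a rough Python equivalent of the same flow (public sheet in, YAML file committed out via the Github contents API) looks something like this; the sheet ID, repository, and token are placeholders.

```python
import base64
import csv
import io

import requests
import yaml  # PyYAML

SHEET_ID = "your-sheet-id"   # placeholder
REPO = "owner/repo"          # placeholder
TOKEN = "your-github-token"  # placeholder

# Public Google spreadsheets can be exported directly as CSV.
csv_url = (
    f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv&gid=0"
)
rows = [dict(row) for row in csv.DictReader(io.StringIO(requests.get(csv_url).text))]

# Commit the rows as YAML to the _data folder via the Github contents API.
# (Updating an existing file also requires passing its current sha.)
payload = {
    "message": "Update sample organizations",
    "content": base64.b64encode(yaml.safe_dump(rows).encode()).decode(),
}
resp = requests.put(
    f"https://api.github.com/repos/{REPO}/contents/_data/organizations.yaml",
    json=payload,
    headers={"Authorization": f"token {TOKEN}"},
)
resp.raise_for_status()
```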

It is kind of a poor man’s way of creating test data and publishing it to Github for use in a variety of continuous integration workflows. I can maintain a rich folder of test data sets for a variety of use cases in spreadsheets, and even invite other folks to help me create and manage the data stored there. Then I can publish them to a variety of Github repositories as YAML, and integrate them into any workflow, loading test data sets into new and existing APIs as part of testing, monitoring, or even just to make an API seem convincing.

To support my work I have a spreadsheet published, and two scripts, one for pulling organizations, and the other for pulling locations–both of which publish YAML to the _data folder in the repository. I’ll keep playing with ways of publishing test data here, for use across my projects. With each addition, I will try and add a story to this research, to help others understand how it all works. I am hoping that I will eventually develop a pretty robust set of tools for working with test data in APIs, as part of a test data continuous publishing and integration cycle.


Running Synthetic Data And Content Through Your APIs

I was profiling the New Relic API and came across their Synthetics service, which is a testing and monitoring solution that lets you "send calls to your APIs to make sure each output and system response are successfully returned from multiple locations around the world"--pretty straightforward monitoring stuff. The name is what caught my attention, and got me thinking about the data and content that we run through our APIs.

Virtualization feels like it defines the levers and gears of our API-driven systems, and synthetics feels like it speaks to the data and content that flows through these systems. It feels like everything in the API stack should be able to be virtualized and sandboxed, including the data and content, which is the lifeblood--allowing us to test and monitor everything.

It also seems like another reason we'd want to share our data schemas, as well as employ common ones like schema.org, so that others can create synthetic data and content sets for a variety of scenarios--then API providers could put these sets to work in testing and monitoring their operations. A sort of synthetic data and content marketplace for the growing world of API testing and monitoring.

I see that New Relic has the name Synthetics trademarked, so I'll have to play around with variations to describe the data and content portion of my API virtualization research. I'll use virtualization to describe the gears of the engine, and something along the lines of synthetic data and content to describe everything that we run through it. I am just looking for ways to better describe the different approaches I am seeing, and tell more stories about API virtualization and sandboxing in ways that resonate with folks.


Providing A Dedicated Test User API As Part Of Your API Virtualization Strategy

I was profiling the Facebook API as part of my API Stack work. While I only use a handful of the endpoints available to me via the Facebook API, as the API Evangelist I feel like I should have an awareness of the popular social API. Additionally, the number of great stories I find dramatically increases with the number of API profiles that I complete.

One story I extracted from my Facebook API research is about providing a dedicated test user API. Using the test user API you can add, manage, and delete test users, which you can use throughout the development and testing of your API integration. Facebook is user-centric, but it seems like the concept applies equally to any other valuable resource made available via APIs today.
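
A sketch of what working against an API like this looks like, using the Graph API's test user endpoint--the app ID, app access token, and API version here are placeholders, so check Facebook's documentation for current details:

```python
import requests

APP_ID = "your-app-id"        # placeholder
APP_TOKEN = "your-app-token"  # app access token, placeholder
GRAPH = "https://graph.facebook.com/v2.5"  # version is illustrative

# Create a test user scoped to your app -- no real accounts involved.
resp = requests.post(
    f"{GRAPH}/{APP_ID}/accounts/test-users",
    params={"installed": "true", "access_token": APP_TOKEN},
)
resp.raise_for_status()
test_user = resp.json()
print(test_user["id"], test_user.get("email"))
```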

I'd file this under virtualization when it comes to organizing it as part of my overall research. Providing virtualization options for API consumers is something that is only going to grow with the Internet of Things, and privacy concerns. API providers should be looking at how they virtualize entire APIs using modern approaches to containerization, so they can be used in dev, QA, and production environments, but they should also be looking at providing data and content virtualization solutions like Facebook does with a test user API.


The Growing Need For API Virtualization Solutions

This conversation has come up over 10 times this month, at Defrag, APIStrat, and in online conversations via Skype and Google Hangouts: the concept of API virtualization solutions. I am not talking about virtualization in the cloud computing and Docker sense, although that will play a fundamental role in the solutions I am trying to articulate. What I am focusing on is providing sandboxes, QA environments, and other specialized virtualization solutions for API providers, that make API consumers' worlds much easier.

I've touched on examples of this in the wild with my earlier post on the API sandbox and simulator from Carvoyant, which is an example of the need for virtualization solutions tailored for the connected automobile. Think of this, but for every aspect of the fast growing Internet of Things space. The IoT demand is about the future opportunity; I've talked about the current need before, when I published I Wish All APIs Had Sandbox Environment By Default.

I envision 1/3 of these solutions being about deploying Docker containers on demand, 1/3 being about virtualizing the API using common API definitions, and the final 1/3 being about the data provided with the solutions. This is where I think the right team(s) could develop some pretty unique skills when it comes to delivering specialized simulations tailored for testing home, automobile, agriculture, industrial, transportation, and other unique API solutions.

We are going to need "virtualized" versions of our APIs, whether it is for web, mobile, devices, or just for managing APIs throughout their life cycle. You can see a handful of the current API virtualization solutions out there in my research in this area, but I predict the demand for more robust, and specialized, API virtualization solutions is going to dramatically increase as the space continues its rapid expansion. I just wanted to put it out there, and encourage all y'all to think more about this area, and push forward the concept of API virtualization as you are building all the bitch'n tooling you are working on.


To Incentivize API Performance, Load, And Security Testing, Providers Should Reduce The Associated Bandwidth And Compute Costs

I love that AWS is baking in monitoring and testing by default in the new Amazon API Gateway. I am also seeing new services from AWS and Google providing security and testing services for your APIs, and other infrastructure. It just makes sense for cloud platforms to incentivize the security of their platforms, but also to ensure wider success through the performance and load testing of APIs as well.

As I'm reading through recent releases and posts, I'm thinking about the growth in monitoring, testing, and performance services targeting APIs, and their convergence with the growth in the number of approaches to API virtualization, and what containers are doing to the API space. I feel like Amazon is baking monitoring and testing into API deployment and management because it is in their best interest, but it is also an area where I think providers could go even further when it comes to investment.

What if you could establish a stage of your operations, such as QA, or maybe production testing, and the compute and bandwidth costs associated with operations in these stages were significantly discounted? Kind of like the difference in storage levels between Amazon S3 and Glacier, but designed specifically to encourage monitoring, testing, and performance on API deployments.

Maybe AWS is already doing this and I've missed it. Regardless, it seems like an interesting way that any API service provider could encourage customers to deliver better quality APIs, as well as help give a boost to the overall API testing, monitoring, and performance layer of the sector. #JustAThought


How Open Source API Proxies, And Other API Services And Tooling Can Strategically Partner

I was gathering my thoughts today around how API management solutions can better work together, in response to an industry discussion going on between several creators of open source tooling. The Github thread is focused on proxies, and how they can work more closely together to facilitate interoperability in this potentially federated layer of the API economy, but as I do, I wanted to step back and look at the bigger picture before I responded.

As I was gathering my thoughts, I also had an interesting conversation with the creator of API Garage, in which one of the topics was how we encourage API service providers to want to work closer together, but this time from the perspective of an API client workspace. This made me think that the thoughts I was gathering about how open source proxies can better work together should be universally applied to every step along the API lifecycle.

Open APIs
It should be a no-brainer for API service providers--have an API! One of the best examples of this is with 3Scale API management--they have a very robust API that represents every aspect of the API management layer. Other service providers like API Science and APIMATIC, who serve other stops along the API life-cycle, also have their own APIs. If you want your API tooling to work with other API tooling, have an API--think about Docker and their API interface, and make your tools behave this way.

Open Source
Provide openly licensed tooling around any service you provide. Make your tooling as modular as you can, and apply open source licenses wherever it makes sense. Open licenses facilitate people working together, and break down silos. This is obvious to the folks on the API proxy thread I'm referencing, but will not be as obvious to other service providers looking to break into the market.

Open Definitions
Speak common API definition formats like Swagger, API Blueprint, and Postman Collections, and provide indexes using APIs.json. If consumers can import and export in their preferred definition format, they will be able to get up and running much quicker, and share patterns between the various proprietary services and the open tooling they are putting to work. There are a lot of opportunities to partner around common API definitions for API deployment and management, which would open up a potentially new world of API services that aggregate, integrate, and sync between common API platforms throughout the life-cycle.

Open Plugins
Developers never want their tooling to be a silo. Allow for the extensibility of any tooling or service, from design, to management, to client, using common approaches to connectors, plugins, etc. Make plugins API first, so that other API service providers can easily take their own APIs, craft a plugin, and quickly bring their value to another platform's ecosystem. Plugins are the doorway to the interoperability that open APIs, source code, and definitions bring to the table for platform providers.

Open Partnerships
Open APIs, source code, definitions, and plugins all set the stage for ever deeper levels of partnership. APIs are the pipes, with open source code and definitions providing the grease, and if all API design, deployment, management, monitoring, testing, performance, security, virtualization, and discovery providers allow for plugins, strategic partnerships can occur organically.

Does all of this sound familiar? It sounds like what the API space is telling the rest of the business world, so I can't help but see the irony in giving API service and tooling providers this same advice. If you want more partnerships to happen, expose all your resources as APIs, provide open tooling and definitions, and allow other companies to plug their features into your solutions--a whole new world of business development will emerge.

With the mainstream growth we are seeing in the API space in 2015, there are some pretty significant opportunities for partnership between API service providers right now--that is, if we follow the same API-first approach, currently being recommended to API providers.


The Potential When You Are Able To Spawn Virtualized Environments For Your API

I was made aware of the Citi Mobile Challenge during a conversation last week with the Anypresence team about their new JustAPIs offering. The conversation with them has triggered several potential stories, but one that stood out for me was the API virtualization opportunities that emerge when you use solutions like the JustAPIs gateway.

To support the Citi Mobile Challenge, the bank launched a set of virtual APIs for the event. I think their description tells a lot of the story:

The APIs made available through Citi® Mobile Challenge power some of Citi's latest digital offerings around the globe. These APIs will give developers an opportunity to create solutions that could function with existing Citi technology. No customer data will be shared. Citi encourages developers to also use APIs outside of Citi to support and add additional features to their innovations.

The bank needed to mimic their production environment, but in a way that developers could still build meaningful mobile apps, without the immediate security and privacy concerns that are present when working with live systems. I wish sandbox environments were the default across all API platforms, which unfortunately is not a reality, but when it comes to fintech the stakes are much higher--which smells like an opportunity for API definitions to me.

It can be a lot of work to establish and maintain both sandbox and production environments, but if you do, it can open up even more opportunity when it comes to supporting hackathons, providing QA environments, and much more. Additionally, if you embrace modern API definition formats like Swagger and API Blueprint in this process, as well as approaches to containerization like Docker, a whole new agility and flexibility can be realized as well.

Like API monitoring, performance, security, and other growing incentives for embracing modern API definition formats, I see the demand for API virtualization growing. When your APIs are well defined, a new world of API definition driven services emerges, making things like virtualization for hackathons, QA, load testing, and simulation, as well as just a permanent sandbox, possible.

I have an API virtualization research area started; I just need to spend the time organizing the companies who are doing interesting things in the space, and consider some of the possible building blocks, before it is ready for prime time--stay tuned!


Launching A Data Visualization API Into Your Own Infrastructure With Lightning

I was reviewing one of the many entries in my review queue of companies who are doing interesting things with APIs, and stumbled across the data visualization API—Lightning. Their implementation grabs my attention on several fronts, but their focus on delivering their API within your own infrastructure via a Heroku button is one of the most relevant aspects.

This approach reflects a seismic shift occurring in how we deploy APIs, and how we deploy architecture overall. You need a data visualization API, let me deploy my API into your cloud, or on-premise infrastructure using popular approaches to virtualization—developers do not need to go to the API, the API will now come to you, and live within your existing infrastructure stack.

I’ve been talking about wholesale APIs a lot lately, showcasing the white label approach by some API providers, and exploring it within the evolution of my own infrastructure. As more savvy API providers jump in on this opportunity, you’ll see more stories emerge trying to understand the shift going on. Lightning is accomplishing this with Heroku and their embeddable button, but companies who embrace a containerized, microservice-centered approach to API deployment will have a wide open playing field for the buying and selling of the wholesale API driven technology being deployed across the emerging API economy.


Virtualized API Stacks

Up until now we have tended to think of APIs individually--we approach integration in terms of the Twilio API, Twitter API, or the Facebook API. But as the number of public APIs has grown beyond 8K, and an unknown number of internal and partner APIs become available, we are seeing new patterns of aggregation and interoperability emerge from companies like Singly, but also seeing automation added into the mix by companies like Temboo, and entire backend stacks from providers like Parse.

These new aggregated or backend stacks of API driven resources can be as general as object and key-value stores, user management, and other developer commodities we see backend as a service (BaaS) providers bring to the table, or they can be very personal, like the photos Singly is aggregating across Flickr, Facebook, and Instagram, and the friends and followers across Facebook, Twitter, and LinkedIn.

As I see these new aggregate and BaaS providers emerge, at an ever increasing pace, I can’t help but think--this still isn’t fast enough (it’s my nature, you should try being me). If I want common mobile developer resources I can adopt one of these new BaaS platforms like Kinvey or Parse, or if I need personal and social resources, which have become part of the fabric of web and mobile apps, I can go to Singly. But what if I want a more specialized network, say just for education? I might need user management, object storage, key-value storage, access to common social tools like friends and photos across multiple social networks, but I also need access to open courseware, and teacher and student directories, all via a very secure, auditable, efficient stack tailored just for K-12. I will have to wait for the next wave of startups to emerge.

While in NYC on Friday I had a great discussion with Temboo about virtualization. Not just the virtualization we’ve come to depend on with cloud computing--which is the virtualization of compute, storage, and database resources--but the virtualization of network resources and software defined networking. Companies like Nicira, Pluribus Networks, Anuta Networks, Arista, and Vyatta are emerging with new products that allow the virtualization of networking resources into new and meaningful network stacks for any possible implementation you can imagine.

After these conversations with the Temboo team, over the weekend I continued to think about the potential of virtualized API stacks. Why can’t I assemble my own API stack? Why do I have to go to each API individually, or wait for new Singlys to emerge in other verticals? Why can’t I assemble Parse, Singly, Twilio, Schoology, and SendGrid into a virtualized API stack that provides not just ease of use, but the security I need to deploy a backend tailored just for K-12 education?

In this vision of the future, API providers could focus on what they do best, and not worry about every use case out there. Providers like Singly, Temboo, and Parse could build abstracted layers on top of this. With this abstraction I wouldn’t be limited to just the friends and followers features on Twitter or Facebook; I could take advantage of the next generation of friend discovery tools like what Singly is delivering--in addition to the value of individual API providers.

With a virtualized approach, I could build the stack that is most meaningful for my internal, partner, or public developers, and if one piece of my stack is proving unreliable, I can replace it with another. API resources would be further commoditized, required to provide JSON definitions of their interfaces (or die a quick death), which virtual API stack platforms could use to discover and offer API resources. API ranking algorithms would emerge, allowing anyone to make sure they were discovering, selecting, and using the best of breed API resources in the areas that matter for any vertical.

With a virtualized API stack I could launch any specialized set of resources that I desired from the best of breed providers out there. I could blend private and public resources together and in return offer them for use in private or public environments, further blurring the lines of what is an API and how they are consumed.

As we see APIs continue to become a driving force in government, healthcare, and education, bridge the online and physical worlds via our automobiles, homes, and buildings, further grow within healthcare via the quantified self, potentially disrupt manufacturing with 3D printing, and drive everything around us with the internet of things--the manual assembling of individual or even aggregated API stacks won’t be enough. We will need the ability to virtualize API stacks for any purpose within hours, not months or years.




If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one person show, so I miss quite a bit, and depend on my network to help me know what is going on.