A Tale of Business Disruption in Document Communications

In the middle of the 1990s, the Internet and its associated IP protocols were like a huge wave offshore of the business world, poised to roll in and cause massive disruption. At that time, I ran a consulting business for telecom clients (Human Communications) and was active on several fronts in anticipation of that wave. In the TR-29 fax standards committee, we started work on how fax communications could take place over the Internet. A small group began work on an initiative called Group 5 Messaging, whose goal was to take the best ideas of fax, email and telex and spin up the next generation of business communications. In late 1996, the Internet Engineering Task Force (IETF) held an informal Birds of a Feather (BOF) session on Internet Fax. In meetings of Study Group 8 of the International Telecommunication Union (ITU), discussions began on how to extend fax protocols to work over the Internet or on private IP networks.

On the business side, fax was very hot: even very small businesses such as pizza parlors had purchased fax machines. Corporations had been adopting fax over Local Area Networks, and companies like RightFax, Omtool, Optus and Biscom had very healthy businesses selling into this space. Brooktrout Technology had introduced multi-channel fax boards and drivers for Windows NT, and had built up market momentum that enabled the company to go public. But all of this fax technology was based on sending faxes over circuit-switched networks. What would be the impact of the Internet and its technology on fax and business communications?

By 1999, the business communications landscape had changed dramatically. On the standards front, the IETF had created several standards for providing fax services via email, and the ITU had referenced these standards in its T.37 standard. The ITU had also independently created a new T.38 standard, which essentially extended the T.30 Group 3 fax protocol into the IP packet world. The Group 5 initiative had lost momentum, as the fax and other communications players lined up to support the new IP-based standards from the IETF and ITU, which appeared to solve the problem of how to send faxes over IP. Related standards work continued, and I was active in making sure that the new T.38 fax protocol was supported under both the existing H.323 call control protocol and the new SIP and Megaco (later H.248) protocols.

On the business side, fax was still doing well, but now had new competition. The advent of the World Wide Web had totally wiped out the Fax on Demand business that had done well in the early Nineties. Various pundits were saying that email was the future of business communications and that new portable document formats like Adobe’s PDF would be used in place of fax. Curiously, the email experts who participated in the IETF Internet Fax work weren’t so sure. Fax had business quality-of-service elements which were hard to duplicate in email: instant confirmation of delivery at the end of a session, negotiation between the endpoints on what document formats were acceptable, and the legal status of fax, since fax messages over the circuit network were accepted as legal documents for business purposes. The IETF working group tried to upgrade email protocols to address the technical elements, but the work was hard and the path to adoption slow.

I also shifted my career, suspending my consulting business to join Brooktrout Technology and help them participate in the new Voice over IP business. Just before I left my business, though, I advised my fax clients and newsletter subscribers to diversify and not put all of their eggs in the fax communications basket. I saw both challenges and opportunities ahead. A large number of new startups had attempted to ride IP fax to success in the late Nineties, but most of them crashed and burned within a couple of years. eFax had introduced “free” IP fax mailboxes, an approach quickly emulated by competitors, but the business model for “free” wasn’t obvious. I’d helped form a new industry association, the Internet Fax and Business Communications Association, in early 1999, but we had difficulty getting fax and other communications industry vendors to sign on. The times were turbulent and the way forward was less than obvious.

In my next post, I’ll talk about how the trends toward IP Fax and its communications competitors played out and which related business communications issues still need to be addressed.

If your organization has participated in the evolution of fax or other business communications from the circuit-switched phone network to IP, please feel free to comment. If you’d like to explore strategies for evolving your application solutions or other communications products and services in this rapidly changing business environment, you can reach me on LinkedIn or on our web site.

Virtual Software: Changing Business Models

One of the best texts I’ve ever read about business models was written by Cory Doctorow, a famous writer and entrepreneur. His novel Makers was not only a great story, but virtually a doctoral thesis on how business models can change and have a radical impact on everything they touch.

A couple of years ago, I helped launch a new virtualized software product line for Dialogic. The PowerVille™ Load Balancer was different in many ways from other products I’d managed. The software was totally agnostic to the underlying hardware, courtesy of a Java code base which was highly portable across environments. As a result, it fit nicely into a variety of virtual environments and was also poised to make the leap into emerging Cloud architectures, in line with trends like Virtualized Network Functions (VNFs) and approaches like the use of OpenStack HEAT templates for configuration.

A few months into the launch, my manager and I talked about how to take this product to the next level and realized that we needed different business models for this kind of product. The traditional load balancer provided by industry leaders such as F5 was built on top of a proprietary hardware platform, and the business model followed suit. Pricing was typically based on an upfront purchase of all of the hardware (and software), accompanied by a service agreement which was renewed year by year. This approach is often called the perpetual model.

But with the Cloud taking over, customers were looking for different answers. Cloud services such as Amazon Web Services (AWS) and much of the industry’s software had moved to subscription or usage-based business models. For example, if you buy a subscription to a software product like Adobe Acrobat, you get the right to use the product as long as you keep paying the monthly subscription fees. Amazon went further: you buy rights to AWS services and pay only for the Cloud infrastructure you actually use. In the world of virtual services, this permits customers to scale up for high-usage events—think of the capacity needed to support online voting via text for a television program like American Idol—and then scale back down as needed, perhaps even to zero.

We considered these kinds of changes for the Dialogic load balancer, but other virtual software products at the company ended up taking the lead in becoming available under subscription or usage-based models. The implications were huge. Sales reps loved the perpetual model, since they’d get a big chunk of commissions every time they sold a big box. In a subscription or usage-based model, the revenue—and the commissions—move to a “pay as you go” basis: no big upfront commissions payout, and you need to keep the customer happy to earn that recurring revenue stream. Finance executives, by contrast, now had a revenue stream which was less bumpy, since there was somewhat less incentive for Sales to go out and close those end-of-quarter deals. Customers also liked the flexibility of subscription models: they may pay more over the long haul than under the perpetual model, but they also have the option to change to a new product or service mid-stream. In summary, the move to virtual software and related innovations such as Software as a Service (SaaS), Infrastructure as a Service (IaaS) and, by extension, Anything as a Service is likely to bring new business models with it. These new business models change the finances on both the customer and vendor side, and not everybody will be pleased with the results, but momentum for these trends continues to grow.
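The arithmetic behind that trade-off is worth making concrete. Below is a minimal sketch in Python, using entirely hypothetical price points, of how the cumulative cost curves of the two models can cross over time: the subscription starts far cheaper, but eventually overtakes the perpetual purchase, which is the “pay more over the long haul” effect mentioned above.

```python
# Illustrative comparison of cumulative cost under perpetual vs.
# subscription pricing. All figures are hypothetical, chosen only
# to show the crossover effect, not taken from any real price list.

PERPETUAL_UPFRONT = 100_000   # one-time hardware + software purchase
ANNUAL_SERVICE = 18_000       # service agreement renewed year by year
MONTHLY_SUBSCRIPTION = 3_500  # all-inclusive "pay as you go" fee

def perpetual_cost(years: int) -> int:
    """Big upfront purchase plus a yearly service agreement."""
    return PERPETUAL_UPFRONT + ANNUAL_SERVICE * years

def subscription_cost(years: int) -> int:
    """No upfront cost; monthly fees for as long as you stay."""
    return MONTHLY_SUBSCRIPTION * 12 * years

for years in range(1, 8):
    p, s = perpetual_cost(years), subscription_cost(years)
    note = "  <-- subscription now costs more" if s > p else ""
    print(f"year {years}: perpetual ${p:,}, subscription ${s:,}{note}")
```

With these made-up numbers the curves cross around year five, which is exactly why Sales, Finance and customers each see the shift differently.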

If your organization has participated in the evolution from perpetual to subscription or usage-based business models, please weigh in with your comments. If you’d like to explore strategies for evolving your application solutions or other communications products in this rapidly changing business environment, you can reach me on LinkedIn or on our web site.

Paradigm Shift: Virtual to the Cloud

We live in a world where communications solutions can be hardware-based, run in a virtual machine on a local server, or live in the Cloud. The paradigm for communications solutions has been shifting from hardware to software to virtualization, as I’ve discussed in my recent posts. Once a solution is virtual, customers in principle have the flexibility to control their own destiny. They can run solutions on their own premises, in the Cloud, or in a hybrid model that uses both approaches.

Let’s consider an example. Dialogic has followed this type of evolution in its session border controller (SBC) products. In 2013, the company positioned two products as SBCs. The BorderNet™ 2020 IMG provided both SBC and media gateway capabilities and found an audience that wanted an IP gateway between different networks or an enterprise edge device. The BorderNet™ 4000 was a new product which focused on SBC interconnect functions and ran on an internally-managed COTS platform. Five years later, both products have changed significantly. The IMG 2020 continues to run its core functions on a purpose-built platform, but its management can be either virtual or web-based. The BorderNet™ 4000 has morphed into a re-branded BorderNet™ SBC product offering, evolving from its initial hardware focus to a more portable software offering. Customers can now run the software on a hardware server, in a choice of virtual machines, or by deploying on the Amazon Web Services (AWS) cloud. Whereas the original BorderNet 4000 only supported signaling, the BorderNet SBC can optionally also support transcoding of media, either in hardware (using a COTS platform) or in software. The journey of these products has offered customers more choices. The original concepts of both products are still supported, but the products now have elements of virtualization which have enhanced their portability. As a result, the full functionality of the BorderNet SBC can run in the Amazon cloud as well as in any of the other deployment models.

Once a product has been virtualized, it can be deployed in numerous ways and under a variety of business models. As customers move solutions to the Cloud, being able to run one or more instances of software in virtual machines is essential. The term Cloud tends to be used generically, but in telecom there are multiple ways the evolution to the cloud is playing out. One example is the OpenStack movement, where open source software lets operators build their own private clouds. Public cloud offerings from Amazon, Microsoft, Google, Oracle, IBM and others have also been popular.

In my next post, we’ll consider how the technical changes we’ve been describing here have also been coupled with changes to business models.

If you participated in the evolution described here, please feel free to weigh in with your comments. If you’d like to explore strategies for evolving your application solutions or other communications products and services in this rapidly changing technical and business environment, you can reach me on LinkedIn.

Following the Path to Virtualization

A number of years back, my product team engaged with a Tier 1 solution provider. They wanted to use our IMG media gateway as part of their solution, but with a condition: they had limited rack space, so they wanted to use an existing server to manage our device. Up until then, we had required customers to load our element management system (EMS) software onto a dedicated Linux server. Instead, our customer asked us to package our EMS software to run on a virtual machine. Our team investigated and was able to port both the underlying Linux layer and the EMS application for use on a Xen virtual machine. Voila! Our software was now virtualized, and our customer was able to re-use their existing server to manage the IMG gateway.

That was my introduction to virtualization, but the approach quickly became much more important. Just a few months later, other customers asked us to port our EMS software to work within the VMware virtual machine environment. There were immediate benefits. The EMS running directly on server hardware required qualification of new servers roughly every two years, an arduous process which became more difficult over time. By contrast, the virtual EMS (which we called the VEC for short) ran on a VMware virtual machine, so we were isolated from any server changes the customer might make. The VEC was also a software-based product, so we could offer it at a retail price well under $1,000, vs. the $3,000+ price point of the server-based version. Over the next several years, more and more customers moved to the virtualized version of the software, and demand for the server version declined.

A couple of years ago, I was asked to take over a new software-based load balancer (LB) product developed by a Dialogic software team in the United Kingdom. The back story here had some similarities to my earlier experience. The team was working with a major customer who really liked their software-based media resource broker (MRB), but had issues with the LB product offered by a major market player. The team built the software load balancer so that it could run either directly on a server or on a virtual machine. When we launched the product for use by all customers, our Sales Engineering team loaded the software onto their laptops using a commonly available virtualization program and were immediately able to set up prototype sessions and adjust configurations via the software’s graphical user interface. So the LB software was virtualized from the beginning. This was part of an overall trend within Dialogic, as more and more software-based components of products were converted for use in virtual environments.

In the early days, virtualization in telecom was mainly used for software tools like user interfaces and configuration, but that is now changing in a major way. The LB product from Dialogic runs in a totally virtual mode: everything from configuration to balancing streams of protocols as diverse as HTTP and SIP is supported, along with very robust security. In the telecom industry, virtualization is part of a sea change in which the new approach to scalability involves building additional capacity and resiliency by adding new instances of software. In turn, this drives the need for new types of orchestration software, which can manage operations in a world where instances must be created, managed and deleted based on real-time needs.
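To make that orchestration pattern concrete, here’s a minimal sketch in Python of the create/monitor/delete control loop at the heart of it. The thresholds, instance names and load metric are all hypothetical, and real orchestrators layer scheduling, health checks and failure recovery on top of this basic cycle.

```python
import random
import time

# A toy autoscaling control loop: add instances under heavy load,
# remove them when load falls. This is the essence of the pattern;
# production orchestrators add scheduling, health checks and recovery.

MIN_INSTANCES, MAX_INSTANCES = 1, 10
SCALE_UP_AT, SCALE_DOWN_AT = 0.75, 0.25   # utilization thresholds

instances = ["instance-0"]

def measure_utilization() -> float:
    """Stand-in for a real metric such as calls or requests per second."""
    return random.random()

for tick in range(20):
    load = measure_utilization()
    if load > SCALE_UP_AT and len(instances) < MAX_INSTANCES:
        instances.append(f"instance-{tick + 1}")   # create a new instance
        print(f"load {load:.2f}: scaled up to {len(instances)} instances")
    elif load < SCALE_DOWN_AT and len(instances) > MIN_INSTANCES:
        retired = instances.pop()                  # delete an idle instance
        print(f"load {load:.2f}: retired {retired}")
    time.sleep(0.1)  # a real loop would poll on a much longer interval
```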

In my next post, I’ll talk about other ways that virtualization is being used as a key principle for building out telecom operations in a variety of Cloud environments. Virtualization is still a relatively young technological movement, but it has already helped spawn some surprising developments.

If you participated in the evolution described here, please feel free to weigh in with your comments. If you’d like to explore strategies on how to evolve your application solutions or other communications products in this rapidly changing business environment, you can reach me on LinkedIn.