
Expert Opinion

Geoff Mulligan: A ‘Dumb’ Way to Build Smart Cities

In every corner of the country and around the world, leaders are trying to make their cities “smarter.” These projects are often in response to specific and ongoing demands — such as parking, overcrowding, noise, and pollution — while others address broader goals — such as reducing energy consumption, improving traffic flow, or achieving sustainability. But as is often the case with grand ideas, many are taking the wrong approach. It’s simply impossible, in one sweep, to build a Smart City. Just as the internet and the web didn’t spring forth fully formed, as if from some “master plan,” Smart Cities must be built as organic, independent, and yet connected pieces. As Stewart Brand cogently argued, even buildings have to learn in steps.

A Smart City roadmap is invaluable, laying out a direction to help set expectations. However, it shouldn’t define specific cross-system technologies and implementation details, nor plan for all projects to launch or complete simultaneously. Projects must instead be created as separate solutions to individual problems, each built as an independent service and stitched together through open standards and open Application Programming Interfaces (APIs). That’s how they must grow if we want them to succeed — learning by iteration.

Today’s problem

In the rush to “capture the market,” companies are selling “complete visions” — though incomplete solutions — of how their systems can solve the ills that plague the modern city. City planners, managers, and officials get sold the idea that these companies have some kind of silver bullet that, in a single solution, integrates all city functions and enhances their capabilities, thus making them work together efficiently. But this belies the true nature of the problem: none of us are smart enough to fully appreciate or understand the complexity of managing all the functions that go into making a city work. The sheer diversity of the systems ensures that no single technology can be applied as “the” solution. In addition, the timeframe for implementing these disparate programs can vary widely, meaning that technology selected at the start of one project will likely be obsolete by the start of another.

Worse yet, these companies are also selling and deploying products that are based on closed, proprietary systems. They include proprietary radios, single-purpose hardware, proprietary software and protocols, and closed web applications and portals. These designs constrain innovation and interfere with interoperability between newer and older systems, often saddling the new with the constraints of the past. This is a Trojan Horse — a solution that requires all future systems to use the same proprietary pieces, thereby locking the city into that particular vendor for the rest of its days, limiting design and technology choices, and stifling innovation and the adoption of newer technology.

It’s not all gloom and doom. With the application of open systems and a service-oriented architecture, future technology can be built to integrate seamlessly with previous technology investments.

Choose a different path

We’ve learned from the lean-agile community to build success in small, incremental steps rather than one grand leap. But with the different needs, design patterns, and timeframes, how is it possible to build a Smart City in small steps? It’s done by leveraging the nature of the internet itself, complete with open standards and open APIs. By decoupling every system and eliminating hidden interfaces, we can relieve the pressures of time and technology interdependencies, thereby allowing greater innovation in each separate project while “future-proofing” the design decisions.

We use different materials and architecture to construct buildings with differing purposes (hospitals vs. homes vs. high-rises), but there’s a consistency even within these varying buildings for standard electrical and plumbing connections. Smart City projects can adopt this same design pattern. This means that for a parking project, the city can pick the most appropriate communication technology but require that the system be built on open standard protocols that underlie the internet (for example, HTTP, IP, TCP, and MQTT), use data formats such as JSON or XML, and have open APIs.
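
To make this concrete, here is a minimal sketch of what “open standards all the way up” can look like for the parking example: a sensor reporting its state as a self-describing JSON payload over MQTT, using the open-source Eclipse Paho client (paho-mqtt 2.x). The broker host, topic scheme, and payload fields are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch: a parking sensor reporting occupancy as JSON over MQTT.
# Broker host, topic scheme, and payload fields are illustrative assumptions.
import json
import time

import paho.mqtt.client as mqtt  # open-source Eclipse Paho client (2.x API)

BROKER_HOST = "mqtt.example-city.gov"  # hypothetical city-run broker
TOPIC = "parking/zone-12/space-07"     # hypothetical open topic scheme

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect(BROKER_HOST, 1883)      # 1883 is the standard MQTT port
client.loop_start()                    # background thread for network I/O

# An open, self-describing JSON payload that any future system can parse.
payload = {
    "space_id": "zone-12/space-07",
    "occupied": True,
    "timestamp": int(time.time()),     # seconds since the Unix epoch
}
info = client.publish(TOPIC, json.dumps(payload), qos=1)  # at-least-once
info.wait_for_publish()                # block until the broker acknowledges
client.loop_stop()
client.disconnect()
```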

Greater than the sum of the parts

Instead of a complete Smart City that’s decades in the making, city managers can look for the “low-hanging fruit” or “greatest pain point” and more quickly build a point solution, knowing that it can simply be connected to any future systems in a scalable and secure manner. A smart parking system for city streets or a parking garage built using LoRa today can be connected to a city traffic management system built using NB-IoT next year, as long as both use open APIs and avoid closed, proprietary solutions, including “walled garden” cloud solutions.
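
The radio layer (LoRa today, NB-IoT next year) becomes irrelevant to this integration when each system also exposes its data through an open HTTP API returning standard JSON. Below is a minimal sketch of such an endpoint using the Flask microframework; the route, port, and data model are illustrative assumptions rather than a reference design.

```python
# A minimal sketch of an open HTTP API over a parking system's data. Any
# future system (e.g., traffic management) can consume it regardless of
# which radio technology the sensors use. Route and schema are assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real deployment this would be fed by the sensor network; a
# hard-coded stand-in keeps the sketch self-contained.
SPACES = {
    "zone-12/space-07": {"occupied": True},
    "zone-12/space-08": {"occupied": False},
}

@app.route("/api/v1/parking/spaces")
def list_spaces():
    """Return current occupancy for every monitored space as JSON."""
    return jsonify(SPACES)

if __name__ == "__main__":
    app.run(port=8080)  # hypothetical port for the city's parking API
```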

The next city improvement project — a smart street light system, for example — might require a completely different communication technology from the previous parking system. Streetlights are up high and more distributed than parking meters or parking spaces in a garage. Streetlights have power, whereas a parking sensor will likely be battery-operated. These different requirements would necessitate the use of different communication technologies, but both systems can be interconnected through common protocols and APIs. Through open APIs, this interconnectivity doesn’t need to be designed in from the beginning but can be added after each of the separate systems is installed.

For example, the streetlight system that’s installed today could be connected to traffic flow sensors installed tomorrow, even though the two systems may use completely different communication technologies and sets of protocols. This new combination — streetlights and traffic flow sensors connected through open APIs — could offer an innovative way to reduce streetlight energy usage: dimming lights when there are no cars, but raising the brightness ahead of the cars’ arrival based on messages from the traffic flow system.
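
Because both systems speak open protocols and publish open APIs, this glue logic can be written after the fact without touching either deployment. The sketch below, again with assumed topic names and payload fields, subscribes to traffic-flow messages and republishes brightness commands that the streetlight system already understands.

```python
# A minimal sketch of after-the-fact integration: dim streetlights when no
# cars are coming, brighten them ahead of approaching traffic. Topic names
# and payload fields are illustrative assumptions.
import json

import paho.mqtt.client as mqtt  # Eclipse Paho client (2.x API)

def on_message(client, userdata, msg):
    report = json.loads(msg.payload)
    # Assumed payload: {"segment": "elm-st-400", "vehicles_approaching": 3}
    brightness = 100 if report["vehicles_approaching"] > 0 else 20
    command = {"segment": report["segment"], "brightness_pct": brightness}
    # Republish as a command the streetlight system already understands.
    client.publish("streetlights/commands", json.dumps(command), qos=1)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("mqtt.example-city.gov", 1883)  # hypothetical broker
client.subscribe("traffic/flow/#")             # all traffic-flow topics
client.loop_forever()                          # handle messages as they arrive
```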

Using and adhering to open APIs and microservices brings another benefit — decoupled velocity. Even concurrent projects can be built at different speeds and rolled out at different times, yet combined once each is completed and functional. As in the example above, the smart streetlight project might take longer to deploy because of the sheer number of devices, whereas the traffic flow sensors might be installed sooner. Open APIs free each system from interdependencies of timing and implementation speed.

Vendor lock-in and future-proofing

Another benefit of open standards and APIs is the elimination of vendor lock-in, which is when a vendor wins all future business because they alone hold the keys to the design and the data. Vendor lock-in squelches innovation: you’re only as innovative as the vendor wants to be or lets you be. If a city needs a design or solution that isn’t in the vendor’s current portfolio, the city’s choices become: wait, pay more to have the vendor add it to their roadmap, or go outside the ecosystem and use some sort of gateway (but gateways are evil; see below) to translate protocols and data and interconnect the systems.

Instead, open standards and APIs bring the ability to incorporate and evolve with newer technologies and systems. But, much like vendor lock-in, you can run afoul of technology lock-in. Imagine having built a Smart City project requiring the use of videotape and now not being able to adopt streaming technologies because they’re incompatible. Technology changes rapidly; in just a few years, we’ve moved from 2G to 3G and now to 5G in the cellular environment. By using open standards to decouple the higher-layer protocols from the lower layers, technology can evolve and systems using older tech can easily interconnect. In this way a system deployed using 4G today can interoperate with 5G systems tomorrow and 6G and 7G systems in a few years.

The underpinnings of innovation

Avoiding vendor and technology lock-in is critical to allow for innovation. Nothing will be more detrimental to a city’s infrastructure and future than being bound to a vendor and having to ask for permission to enhance or extend a system’s functionality. As new technology comes to market and new services emerge to solve other city issues, the ability to quickly test and connect them to existing solutions is necessary for offering evolving solutions and bringing more opportunities for innovation and cost reduction. When you embark on your next project, ask your vendors: “Do you use open standard protocols?” and “How are your APIs and data published?”

Avoid these traps — the ‘evil’ gateway and ‘private clouds’

One tool that many vendors attempt to leverage to show openness and interoperability is the “gateway.” They claim that they provide, or can build, a gateway to connect to other systems. Gateways are a never-ending trap on so many levels:

  • they’re a single point of failure;
  • they’re a single point of attack for hackers;
  • they require complex coordination between systems;
  • maintenance and updates are costly or non-existent;
  • updates need to be managed;
  • they add extra costs for hardware and power; and
  • they’re closed and proprietary.

The second trap is private clouds and walled gardens. The vendor will claim that they use “all of the open internet standards,” listing protocol after protocol, but they use these protocols only to send the data (your data) into a closed, proprietary cloud system — locking it away so that only they have the keys. This is akin to building a road that leads to a cul-de-sac, which is blocked by a locked gate that only lets traffic in. Then, new systems must be built to connect through this cloud, likely via closed and proprietary interfaces. In the end, only other systems in this closed ecosystem can be used for future projects, thereby limiting innovation and increasing time and costs. Sending data to the cloud isn’t a panacea, as many vendors would like to suggest.

Who owns the data — that is, your data

In Smart City projects, the goal of improving city services or infrastructure is the leading driver for implementation, but the greatest benefits will come from the availability of the data gathered by these projects and new systems. Unfortunately, many of the Smart City systems being proffered today lock access to that data away in walled gardens, as mentioned before. It’s imperative that the data be sent to city-owned and managed servers or the city’s data lake, or be made available without license through open APIs. Only in this way will the city and future Smart City projects be able to use and leverage this wealth of information — the underlying real value of these types of projects.

A related concern surrounding data ownership is the rights to the use and sale of the data created by the Smart City project — a valuable commodity. Throughout the life of the project, it should be clear that the city owns all rights to the data. The vendor may not access, distribute, or sell any of the data, whether in raw form or aggregated, without the explicit permission of the city. Only in this way can you protect the rights and privacy of the city and its citizens.

Choosing the right project

By adopting open standards and APIs, you’re now able to embark on a Smart City project without having to solve all other city problems at the same time or constrain them with the choices made today. But choosing the “right” project is important. In some cases, it’s prudent to choose a small, fast, low-cost project. This allows you to get your feet wet, test vendors, accomplish a project in a short time, and hopefully succeed; but if you fail, fail fast, learn, and move on. These projects sometimes have a drawback, though: they may have little impact, and others may view them as “ho hum.”

An alternative is to choose a project that’s a large “pain point” for the city. By definition, these projects have great visibility and impact, but may have far greater risk and take much longer to complete. They don’t generally meet the rules for lean-agile, but the small “safe” projects may not show off the true benefits that a Smart City can bring. Solve this by using divide and conquer. Rather than implementing smart parking across the entire city, choose to focus on a particularly congested city section or single parking structure.

Building success

When a city sets out to become smarter by investing in a Smart City project, use this checklist to evaluate the project:

  • Does it start small and scale well? This is better than a monolithic solution that requires a gigantic investment.
  • Is it locking the city into technologies, or, even worse, vendors? Does it exclude other vendors?
  • Is it open? What protocols are used? Are the APIs published and open?
  • Did the vendor mention or require (evil) gateways?
  • Does it solve a problem for the city quickly, even if it’s only a small problem?
  • What will the city be able to learn from taking on this project?
  • Who owns the data?

Through the strict application and requirement of openness, your Smart City project can be delivered in a way that’s quick, beneficial, evolvable, and scalable. Our cities can and will become smarter and better places to live through small steps and open standards — open APIs and microservices are the foundational stepping stones to that future.

Geoff Mulligan is IoT Practice Lead at Skylight Digital and CTO for IIoT at Jabil. He is a past founder and chairman of the LoRa Alliance and the IPSO Alliance, a former White House Presidential Innovation Fellow on IoT, and the creator of 6LoWPAN. This article originally appeared on the author’s website and is reposted with permission.

BroadbandBreakfast.com accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@broadbandcensus.com. The views reflected in Expert Opinion pieces do not necessarily reflect the views of BroadbandBreakfast.com and Breakfast Media LLC.

Expert Opinion

Dmitry Sumin: What to Do About Flash Calls, the New SMS Replacement

Why are flash calls on the rise and how do operators handle them to maximize revenue?

The author of this Expert Opinion is Dmitry Sumin, Head of Products at AB Handshake Corporation

Chances are you’ve received several flash calls this week when registering for a new app or verifying a transaction. Flash calls are almost instantly dropped calls that deliver one-time passcodes to users, verifying their phone numbers and actions. Many prominent apps and companies, such as Viber, Telegram, WhatsApp, and TikTok, use flash calls as a cheaper, faster, and more user-friendly alternative to application-to-person SMS.

With flash call volume expected to increase 25-fold between 2022 and 2026, from five billion to 130 billion calls, it’s no wonder they’re a hot topic in the telecom industry.

But what’s the problem, you may ask?

The problem is that there is currently no way for operators to bill zero-duration calls. This means operators make no termination revenue from flash calls, even as those calls load their networks. What’s more, operators lose SMS termination revenue as businesses switch to flash calls. SMS business messaging accounted for up to five percent of total operator-billed revenue in 2021, so you can see the scale of the potential revenue losses for operators.

In this article, I’ll discuss why flash calls are on the rise, why it’s difficult to detect and monetize them, and what operators can do about this.

Why are flash calls overtaking SMS passcodes?

Previously, application-to-person SMS was a popular way to deliver one-time passwords. But enterprises and communication service providers are increasingly switching to flash calls because they have several disruptive advantages over SMS.

First and foremost, flash calls are considerably cheaper than SMS, sometimes costing as little as one-eighth as much. Cost of delivery is, of course, a prime concern for apps and enterprises.

Second, flash calls ensure smooth user interaction, which boosts user satisfaction and retention. On Android devices, mobile apps can automatically extract flash call passcodes, making the two-factor authentication process fast and frictionless; the basic flow is sketched below. In comparison, SMS passcodes require users to read the SMS and sometimes enter the code manually.

Third, on average flash calls reach users within 15 seconds, while SMS sometimes take 20 seconds or longer. The delivery speed of flash calls also improves the user experience.
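
The mechanics behind that automatic extraction are worth sketching: the one-time code travels in the last digits of the calling number itself, so the app only needs to read the incoming caller ID. In the minimal Python sketch below, place_flash_call is a hypothetical stand-in for a real carrier or CPaaS dialing API, and the number range and code length are illustrative assumptions.

```python
# A minimal sketch of server-side flash call verification: the one-time
# code is carried in the last digits of the calling number itself.
# place_flash_call() is a hypothetical stand-in for a carrier/CPaaS API.
import secrets

PENDING = {}  # user phone number -> expected code

def place_flash_call(from_number: str, to_number: str) -> None:
    """Hypothetical: dial to_number from from_number, then hang up."""
    ...

def start_verification(user_number: str) -> None:
    code = f"{secrets.randbelow(10_000):04d}"   # e.g. "4831"
    PENDING[user_number] = code
    caller_id = f"+1555123{code}"               # code = last four digits
    place_flash_call(caller_id, user_number)    # dropped before it's answered

def check_verification(user_number: str, reported_caller_id: str) -> bool:
    """The app reads the incoming caller ID and reports it back."""
    expected = PENDING.pop(user_number, None)
    return expected is not None and reported_caller_id.endswith(expected)
```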

The problem: Flash calls erode operators’ SMS revenues

While offering notable advantages for apps, flash call service providers, and end users, flash calls create numerous challenges for operators and transit carriers.

As we discussed before, flash calls erode operators’ SMS revenues because much of the new flash call traffic will be shifted away from current SMS business messaging. The issue is only going to become more pressing as the volume of flash calls grows.

So from the operator’s standpoint, flash calls reduce revenue, disrupt relations with interconnect partners, and overload networks. However, there is still no industry consensus on how to handle flash calls: block them like spam and fraudulent traffic or find a monetization model for this verification channel, like for application-to-person SMS.

Accurate detection of flash calls is a challenge

The first crucial step that gives operators the upper hand is accurately detecting flash calls.

This is difficult because operators have no way of discerning legitimate verification flash calls from fraud schemes that rely on dropped calls, such as wangiri. The wangiri fraud scheme uses instantly dropped calls to trick users into calling back premium-rate numbers. In addition, flash calls need to be distinguished from genuine missed calls placed by customers.

The problem is that even advanced AI-powered fraud management systems struggle to accurately differentiate between various zero-duration calls. The task requires AI engines to be trained on large volumes of relevant traffic coupled with analysis of hundreds of specific call parameters.
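
A production system learns these distinctions from large volumes of real traffic, but a toy rule-based sketch shows why duration alone is useless: every call below lasts zero seconds, and only secondary signals, such as known sender ranges or callback behavior toward premium-rate numbers, separate a billable flash call from wangiri fraud or an ordinary missed call. The field names, number ranges, and signals are illustrative assumptions.

```python
# A toy sketch of why zero-duration calls are hard to tell apart: duration
# says nothing, so secondary signals must carry the decision. Field names,
# number ranges, and signals are illustrative assumptions.

KNOWN_FLASH_RANGES = ("+1555123",)  # ranges negotiated with flash call providers

def classify_zero_duration_call(cdr: dict) -> str:
    """Classify one zero-duration call detail record (CDR)."""
    if cdr["caller"].startswith(KNOWN_FLASH_RANGES):
        return "flash_call"   # billable under a provider agreement
    if cdr.get("callback_to_premium_rate", False):
        return "wangiri"      # callbacks hit premium-rate numbers: block
    return "missed_call"      # likely a genuine subscriber call

# The same zero duration, three different verdicts:
print(classify_zero_duration_call({"caller": "+15551234831"}))
print(classify_zero_duration_call({"caller": "+88230000001",
                                   "callback_to_premium_rate": True}))
print(classify_zero_duration_call({"caller": "+14045550123"}))
```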

Dedicated anti-fraud solutions are the answer

There are only a few solutions on the market that are capable of accurately distinguishing flash calls from other zero-duration calls. Dedicated fraud management vendors have made progress on this difficult task.

The highest accuracy of flash call detection now available on the market is 99.92 percent. Such tools allow operators to precisely determine the ranges from which flash calls are sent. As a result, operators can make an informed decision on how to treat flash calls to maximize revenue and can proactively negotiate with flash call providers.

Flash call detection creates new opportunities

Our team estimates that flash calls account for up to four percent of Tier 1 operators’ international voice traffic. Without accurate detection and a billing strategy, this portion of traffic overloads operators’ networks and generates no revenue. However, with proper detection, flash calls offer a new business opportunity.

Now is a crucial time for operators to implement flash call detection in their systems and capitalize on the trend.

There are a few anti-fraud solutions on the market that give operators all the necessary information to negotiate a billing agreement with a flash call provider. Once an agreement has been reached, all flash calls coming from this provider will be monetized, much like SMS.

All flash calls not covered by agreements can be blocked automatically. This will help to restore SMS revenues. Once a flash call has been blocked, subscribers will most likely receive an SMS passcode sent as a fallback.

Moreover, modern solutions don’t affect any legitimate traffic because they only block selected ranges. This also helps to prevent revenue loss.

Essentially, the choice of how to handle flash calls comes down to each operator. However, without a powerful anti-fraud solution capable of accurately detecting flash calls in real time, it’s nearly impossible to monetize flash calls effectively and develop a billing strategy.

Dmitry Sumin is the Head of Products at the AB Handshake Corporation. He has more than 15 years of experience in international roaming, interconnect and fraud management. Since graduating from Moscow State University, he has worked for both vendors and network operators in the MVNO and telecommunications market. This piece is exclusive to Broadband Breakfast.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views reflected in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

Expert Opinion

Bjorn Capens: Strong Appetite for Rural Broadband Calls for Next Generation Fiber Technology

The first operator to bring fiber to a community creates a significant barrier to entry for competitors.

The author of this Expert Opinion is Björn Capens, Nokia Fixed Networks European Vice President

In July, the Biden-Harris administration announced another $401 million in funding for high-speed Internet access in rural America. This was just the latest in a string of government initiatives aimed at helping close the US digital divide.

These initiatives have been essential for encouraging traditional broadband providers, communities and utility companies to deploy fiber to rural communities, with governments cognizant of the vital role broadband connectivity has in sustaining communities and improving socio-economic opportunities for citizens. 

Yet there is still work to do, even in countries with the most advanced connectivity options. For example, fixed broadband is missing from almost 30 percent of rural American homes, according to Pew Research. It’s similar in Europe, where a recent European Commission Digital Divide report found that roughly 18 percent of rural citizens can only get broadband speeds of at most 30 Mbps, a speed that struggles to cope with modern digital behaviors.

Appetite for high-speed broadband in rural areas is strong

There’s no denying the appetite for high-speed broadband in rural areas. The permanent increase in working from home and the rise of modern agricultural and Industry 4.0 applications mean that there’s an increasingly attractive business case for rural fiber deployments – as the first operator to bring fiber to a community creates a significant barrier to entry for competitors. 

The first consideration, then, for a new rural fiber deployment is which passive optical network technology to use. Gigabit PON seems like an obvious first choice, being a mature and widely deployed technology. 

However, GPON services are a standard offering for nearly every fiber broadband operator. As PON is a shared medium with usually up to 30 users each taking a slice, it’s easy to see how a few Gigabit customers can quickly max out the network, and with the ever-increasing need for speed, it’s widely held that GPON will not be sufficient by about 2025. 
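
The arithmetic behind that concern is easy to check. Standard GPON offers 2.488 Gbps downstream shared across the split, so with the roughly 30-user split mentioned above, the average per-user share is well under 100 Mbps, and just three subscribers bursting at 1 Gbps already exceed the line rate. A short sketch of the numbers:

```python
# Back-of-the-envelope check: why a few Gigabit customers can max out GPON.
# 2.488 Gbps downstream is the standard GPON line rate (ITU-T G.984);
# the ~30-user split follows the figure in the article.
GPON_DOWNSTREAM_GBPS = 2.488
USERS_PER_PON = 30

average_share_mbps = GPON_DOWNSTREAM_GBPS * 1000 / USERS_PER_PON
print(f"Average downstream share: {average_share_mbps:.0f} Mbps per user")
# -> roughly 83 Mbps each if all users pull data at once

bursting_customers = 3
demand_gbps = bursting_customers * 1.0   # three customers at 1 Gbps each
print(f"Demand: {demand_gbps} Gbps vs. {GPON_DOWNSTREAM_GBPS} Gbps available")
# -> 3.0 Gbps demanded vs. 2.488 Gbps available: the PON is saturated
```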

XGS-PON is an already mature technology

The alternative is to use XGS-PON, a more recent, but already mature, flavor of PON with a capacity of 10 Gigabits per second. With the greater capacity, broadband operators can generate higher revenues with more premium-tier residential services as well as lucrative business services. There’s even room for additional services to run alongside business and residential broadband. For example, the same network can carry traffic from 4G and 5G cells, known as mobile backhaul. That’s either a new revenue opportunity or a cost saving if the operator also runs a mobile network.

This convergence of different services onto a single PON fiber network is starting to take off, with fiber-to-the-home networks evolving into fiber for everything, where homes, businesses, industries, smart cities, mobile cells and more are all running on the same infrastructure. This makes the business case even stronger. 

Whether choosing GPON or XGS-PON, the biggest cost contributor is the same for both: deploying the outside fiber plant. Therefore, the increased cost of XGS-PON over GPON is far outweighed by the capacity increase it brings, making XGS-PON the clear choice for a brand-new fiber deployment. XGS-PON protects this investment for longer, as its higher capacity makes it harder for new entrants to offer a superior service.

It also doesn’t need to be upgraded for many years, and when it comes to the business case for fiber, it pays to take a long-term view. Fiber optic cable has a service life of 75 years or more, and even as the speeds running over it increase, the cable itself can remain the same.

Notwithstanding these arguments, fiber still comes at a cost, and operators need to carefully manage those costs in order to maximize returns. 

Recent advances in fiber technology allow operators to take a pragmatic approach to their rollouts. In the past, each port on a PON line card could deliver only one technology. But Multi-PON has multiple modes: GPON only, XGS-PON only, or both together. It even has a forward-looking 25G PON mode.

This allows an operator to easily boost speeds as needed with minimal effort and additional investment. GPON could be the starting point for fiber-to-the-home services, XGS-PON could be added for business services, or even a move to 25G PON for a cluster of rural power users, like factories and modern warehouses – creating a seamless, future-proof upgrade path for operators. 

The decision not to invest in fiber presents a substantial business risk

Alternatively, there’s always the option for a broadband operator to stick with basic broadband in rural areas and not invest in fiber. But that actually presents a business risk, as any competitor that decides to deploy fiber will inevitably carve out a chunk of the customer base for themselves. 

Besides, most operators are not purely profit-driven; they too recognize the harm in prolonging the status quo in underserved communities. High-speed broadband makes areas more attractive for businesses, creating more jobs and stemming population flows from rural to urban centers.

Rural broadband not only improves lives but also decreases the world’s carbon emissions, both directly, compared with alternative broadband technologies, and indirectly, by enabling online and remote activities that would otherwise involve transportation. These social and economic benefits of fiber are highly regarded by investors and stockholders who place corporate social responsibility high on their agendas.

With the uber-connected urban world able to adopt every new wave of bandwidth-hungry application – think virtual reality headsets and the metaverse – rural communities are actually going backwards in comparison. The way forward is fiber and XGS-PON. 

Björn Capens is Nokia Fixed Networks European Vice President. Since 2017, Capens has been leading Nokia’s fixed networks business, headquartered in Antwerp, Belgium. He has more than 20 years of experience in the fixed broadband access industry and holds a Master’s degree in Electrical Engineering, Telecommunications, from KU Leuven. This piece is exclusive to Broadband Breakfast.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views reflected in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

Expert Opinion

Johnny Kampis: Federal Bureaucracy an Impediment to Broadband on Tribal Lands

18% of people living on Tribal lands lack broadband access, compared to 4% of residents in non-tribal areas.

The author of this Expert Opinion is Johnny Kampis, director of telecom policy for the Taxpayers Protection Alliance

A new study from the Phoenix Center finds that as the federal government pours tens of billions of dollars into shrinking the digital divide in tribal areas, much of that gap has already been eliminated.

The report, and a second from the U.S. Government Accountability Office, are more indications that regulations and economic factors that include income levels continue to hamper efforts to get broadband to all Americans.

The Infrastructure Investment and Jobs Act of 2021 allocated $45 billion toward tribal lands. This was done as part of a massive effort by the federal government to extend broadband infrastructure to unserved and underserved areas of the United States.

George Ford, chief economist at the Phoenix Center for Advanced Legal & Economic Public Policy Studies, wrote in the recent policy bulletin that while there is still plenty of work needed to be done in terms of connectivity, efforts in recent years have largely eliminated the broadband gap between tribal and non-tribal areas.

Ford examined broadband deployment around the U.S. between 2014 and 2020 using Form 477 data from the Federal Communications Commission, comparing tribal and non-tribal census tracts.

Ford points out in the bulletin that the FCC has observed several challenges for broadband deployment in tribal areas, including rugged terrain, complex permitting processes, jurisdictional issues, a higher ratio of residences to business customers, higher poverty rates, and cultural and language barriers.

Ford controlled for some of these differences in his study comparing tribal and non-tribal areas. He reports in the bulletin that the statistics suggest nearly equal treatment in high-speed internet development.

Encouraging results about availability of broadband in Tribal areas

“These results are encouraging, suggesting that broadband availability in Tribal areas is becoming closer or equal to non-Tribal areas over time, and that any broadband gap is largely the result of economic characteristics and not the disparate treatment of Tribal areas,” Ford wrote.

But he also notes that the unconditioned differences show a 10-percentage-point spread in availability in tribal areas, which indicates how much poverty, low population density, and red tape are harming efforts to close the digital divide there.

“These results do not imply that broadband is ubiquitous in either Tribal or non-Tribal areas; instead, these results simply demonstrate that the difference in availability between Tribal and non-Tribal areas is shrinking and that this difference is mostly explained by a few demographic characteristics,” Ford wrote.

In a recent report, the GAO suggests that part of the problem lies with the federal bureaucracy – that “tribes have struggled to identify which federal program meets their needs and have had difficulty navigating complex application processes.”

GAO states that 18 percent of people living on tribal lands lack broadband access, compared to 4 percent of residents in non-tribal areas.

The GAO recommended that the Executive Office of the President specifically address tribal needs within a national broadband strategy and that the Department of Commerce create a framework within the American Broadband Initiative for addressing tribal issues.

“The Executive Office of the President did not agree or disagree with our recommendation but highlighted the importance of tribal engagement in developing a strategy,” the report notes.

That goes together with the GAO’s dig at the overall lack of a national broadband strategy by the Biden Administration in a June report. As the Taxpayers Protection Alliance reported, the federal auditor noted that 15 federal agencies administer more than 100 different broadband funding programs, and that despite a taxpayer investment of $44 billion from 2015 through 2020, “millions of Americans still lack broadband, and communities with limited resources may be most affected by fragmentation.”

President Biden has set a goal for universal broadband access in the U.S. by 2030. These recent reports show that the federal bureaucracy under his watch needs to do a better job of getting out of its own way.

Johnny Kampis is the director of telecom policy for the Taxpayers Protection Alliance. This piece is exclusive to Broadband Breakfast.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views reflected in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.
