
Fall-Winter 2020


In This Issue

 -
5G

From the Editor

Not too long ago this administration came up with the brilliant idea that the government should be t...
 -
Privacy

Keeping the IoT Smart and Secure

By the end of 2020, there will be 1.91 billion Internet of Things connections. Securing these connec...
 -
Artificial Intelligence

Beyond AI, Ambient Intelligence is on the Horizon

Ambient Intelligence (AmI) – one of those terms that, unless you are close to the topic, sounds like...
 -
Security

Vulnerabilities in GTP Threaten Mobile Operators and Subscribers

Earlier, Positive Technologies described how SS7, the most mature mobile roaming protocol, in terms ...
 -
Technology Fundamentals

What Is Edge Computing?

Edge computing and donuts have one thing in common: the closer they are to the consumer, the better....
 -
5G Antenna Technology

How 4G and 5G Antennas Really Work

Picture yourself in an open space – a meadow if you will – on a quiet sunny day. In front of you, 30...
 -
5G Environment

It’s About Time for 5G – And About Giving Radios Precise Local Clock Sources Even in Harsh Environments

There are a number of technological hurdles the industry faces while preparing for 5G. One of the mo...
 -
5G

Market Disparities: Building A Strategy for Digital Edge Empowerment

Robust, reliable, and efficient access to the Internet — and to the content and resources it plays h...
 -
5G

Wireless Traffic Forecasts: 5G Will Make Little Difference to Long-Term Trends

Analysys Mason’s Wireless network data traffic: worldwide trends and forecasts 2020–2025 is the firs...
 -
Case Study

Dublin, Ohio, Embarks on Smart City Journey

A smart city is not built in a day. The road to a smarter future involves careful planning, a strate...
 -
Case Study mmWave

Winning in 5G with Rapid Characterization of Evolving Antenna Designs

In multiple product categories, the race is on to be first to market with 5G devices. Ultimately, th...
 -
5G

Simplifying Fiber Deployments for 5G

One of the biggest topics of discussion regarding 5G is the need for fiber. Due to latency and bandw...
 -
5G

Quality Mobile Connectivity for Rural America

According to a 2019 report by Pew Research Center titled Digital Gap Between Rural and Nonrural Amer...
 -
5G Artificial Intelligence Internet of Behaviors Security

Trends

As we are all aware, the pandemic has reordered the world. Trends that were once outside of the wire...
Thought Leader Forum
5G

Simplifying Fiber Deployments for 5G

Chief Marketing Officer, Clearfield

5G

Quality Mobile Connectivity for Rural America

Intelsat Senior Principal Product Marketing Manager, Mobile Network Operators

 -
5G

From the Editor

Some ideas are just too ridiculous to entertain.

Not too long ago this administration came up with the brilliant idea that the government should be the one to run America's 5G network. The idea was floated by the Department of Defense (DoD), which issued a Request for Information (RFI) with the idea in mind of a government-owned and operated 5G network, using the mid-band spectrum currently used for naval operations. And, of course, the White House justification, as with everything, is national security.

There are some additional topics in the request. One is around dynamic spectrum sharing (DSS); the other is whether to utilize multiple spectrum-sharing solutions, including leasing arrangements.

In fact, the Sinophobes in the National Security Council go as far as to say that “extraordinary efforts to counter the growing economic and political threat from China’s aggressive efforts to develop 5G are necessary.”

The Department of Defense’s RFI on the creation of a government-owned and operated 5G network will do nothing but slow the deployment of this critical technology.

Obviously, there is a national imperative for this prime 5G spectrum. However, most agree that nothing good (and much bad) would come from the government owning America's 5G network. And just about everybody I know in the business thinks it is a bad idea. They are not alone; even those in government think it is a bad idea. To wit, according to Reps. Frank Pallone (D-N.J.) and Mike Doyle (D-Pa.), "The Department of Defense's RFI on the creation of a government-owned and operated 5G network will do nothing but slow the deployment of this critical technology. The plan appears specifically crafted to enrich President Trump's cronies and undermines the careful and complicated work done by the FCC and the NTIA to allocate this spectrum for commercial use."

While I like some of the FCC regulators, personally, I am not 100 percent convinced the FCC is all in on the best possible use of spectrum, even though it is supposed to be independent of government control. As much as one would like to think the FCC is an independent agency, it too is mired in politics.

One has only to look at what happened to Commissioner O'Rielly, whose renomination was withdrawn after he took a position Trump did not agree with. As well, many of the decisions made by the FCC are politically motivated (perhaps "motivated" is too strong a word; let us use "biased" instead).

The idea of the government owning and selling 5G spectrum is nothing short of ludicrous! There is very little the federal government gets right. The waste of money and resources has been publicized over and over. As well, political party preferences course through the veins of every Congressional member as they take pay-for-play money from special interests.

The latest nonsense is the White House suggesting that the DoD partner with Rivada Networks, a company in which prominent Republicans and supporters of President Donald Trump have investments, including GOP strategist Karl Rove, who is both a significant investor in and lobbyist for Rivada. They want to develop a 5G network using part of the mid-band spectrum the DoD holds (without a competitive bidding process, of course).

In response, Rivada claims that it would not be in competition with, or want to be part of, the national 5G network. Its main claim is that it would provide private and existing 5G network providers with additional 5G capacity if they needed it. Of course, what else would one expect it to say?

I have no problem with the DoD and others holding spectrum. If they want to build their own private 5G network with the spectrum they hold, by all means, they can knock themselves out. But to manage what they do not own and use it to create networks based upon the well-worn national security song does not sit well with me.

The government has a history, an ongoing presence, and, certainly, an expected continuation of using any and all resources for any number of nefarious activities, from spying to whisking off protestors in black SUVs. I see this play as just another opportunity to do more of the same (MOTS), especially if government and private 5G networks share spectrum – an easy path to spying of one sort or another.

Next, I would not put it past them to offer it for use by entities that can be competitive with the private sector or use it as bait or a reward to companies that are willing to use it to favor government organizations.

An analyst I know, Roger Entner, puts it succinctly: "the established carriers are not going to buy capacity from Rivada unless they are forced to do that, and then we're another step toward socialism." As well, such government-controlled spectrum might just be immune to FCC policy.

In other words, this is a bad idea!

In the end, the present makeup of the U.S. government is the most dictatorial of any administration to date. It has weaponized so many platforms, wireless being one of them.

As I have said many times, we have already lost the 5G “race.” The last thing we need to do is push this platform so we can say we are a leader in the technology.

It does not matter who the leader is. Nobody is that far behind, and the priority should be deploying a solid, sustainable 5G platform, technology, and infrastructure – not bragging rights, which, in the end, do nothing for the bottom line or anything else quantitative, for that matter.

Ernest Worthman is the Executive Editor of AWT Magazine.
 -
Privacy

Keeping the IoT Smart and Secure

Assessing the Security, Analytics and Overall Ecosystem Of Smart IoT Gateways

By the end of 2020, there will be 1.91 billion Internet of Things connections. Securing these connections is becoming an increasingly challenging - and critical - function. That is why key IoT vendors are investing significant dollars and hours into research and development related to Smart IoT gateways.

However, IoT gateways are currently caught amid a greater transformative evolution that shifts focus from the cloud to the edge. This reverses the investment priorities of the past decade and is causing IoT vendors to revisit their market strategies and further enhance the edge capabilities of their gateways.

Other Keys to Know

Hardware and software digital security options in IoT gateways are steadily gaining momentum involving increased support for crypto-processes, Internet Protocol Security (IPsec)/Virtual Private Network (VPN) options, Machine Learning (ML)-empowered anti-malware, firewall and Intrusion Detection and Prevention System (IDPS), secure Root of Trust (RoT), and device bootstrapping, among many others.

Increased levels of edge processing and data filtering for IoT gateways may originate at the silicon and chipset level, along with other critical security operations. However, native support for cloud management platforms is still very much part of the equation.

Securing legacy equipment, offering extensive brownfield management services, and providing hardware-, software-, and platform-agnostic gateway services will ease implementation and increase interoperability and data-driven intelligence. This will streamline the transition of Information Technology (IT) security tools into the Operational Technology (OT) infrastructure at both the gateway and the server levels, providing a much-needed security respite for IoT implementers.

Industrial IoT (IIoT), connected utilities, and smart energy markets will benefit most from the addition of next-generation IoT gateways, which allow a wide range of edge operations and intelligence but remain highly dependent upon overarching cloud services.

Defining Smart IoT Edge Gateways

Smart IoT edge gateways have all the characteristics of router devices but encompass a much more extensive range of technological elements, including advanced connectivity support and network management, hardware/embedded and software cybersecurity options, processing power, data analytics, intelligent design, multi-tenancy vendor support, advanced management options, Application Programming Interface (API) design, cloud service integrations, higher levels of modularity, and some level of Artificial Intelligence (AI) support. That AI support, on top of some network services, also relates to some form of security automation and orchestration (as part of a larger suite or managed service), network security, anti-malware, or malicious traffic detection, depending on software elements, the Operating System (OS), and Software Development Kits (SDKs).

Some organizations believe that segmenting gateway products is wrong and, ultimately, nothing less than a marketing scheme.

The term “smart IoT edge gateways” is used to reflect the current evolutionary trends and designs needed to bring IoT gateways into the future and address the growing IoT deployment, security, and management requirements. They can be referred to as “smart,” “intelligent,” or “next-generation” gateways (or routers, depending on the vendor). Still, some vendors use various descriptions as marketing terms, regardless of actual software or hardware capabilities. Note that other organizations believe that segmenting gateway products is wrong and, ultimately, is nothing less than a marketing scheme, but, perhaps quite ironically, they still use terms like “Artificial Intelligence” and “AI” to describe their own solutions, even though they offer no automation on any level, edge analytics and data filtering are severely lacking, and the ML tools involved are just borderline intelligent (e.g., simple linear regression) or incapable of providing any meaningful insights.

Connectivity and IoT Management Platforms

Communication and Protocol Translation

Connectivity Support: A standard requirement for all gateway/router products is extended support for a variety of communication protocols and connectivity modules. Tailoring connectivity options to focus only on the communication needs of specific verticals or applications will drive down costs. What differentiates the connectivity options of "smart" IoT gateways from standard ones is the advanced connectivity support, interoperability options, streamlined cloud-edge communication, protocol translation capabilities, support for legacy systems, and some form of data encryption (which might not always be applicable depending on the target application).

These characteristics are addressed on three different levels:

  1. The hardware level with the incorporation of the appropriate connectivity modules that allow communication with each communication protocol;

  2. The gateway software level, which enables multi-protocol support, routing, and protocol translation; and

  3. In some cases, at the network architecture level proposed by leading communication authorities and industry entities.

Protocol Translation, State-of-the-Art Communication, and Interoperability: Next-generation gateways will offer extended support for a wide array of communication protocols coupled with flexible connectivity services. This includes protocol translation for both legacy and state-of-the-art protocols, which is of critical importance for gateways operating in the IIoT, critical infrastructure, connected utilities, smart energy, and building automation markets.

IoT Device Management Platforms

An essential component of next-generation IoT gateways is management services, whether localized (gateway-based), on-premises (network server-based), or platform (cloud-based). This is the quintessential characteristic that separates gateways' older, traditional role of merely routing data traffic between devices and servers from their emerging role of extending secure management services to connected devices.

Next-generation IoT gateways will need localized, on-premises, or platform management services.

Device management options can be customized according to the implementers' specifications. They can range from the simple and straightforward, albeit somewhat insecure, management of basic credentials and device keys all the way to more secure uses of digital certificates and complex Public Key Infrastructure (PKI) options. Note that digital certificate management can be achieved internally without a Certificate Authority (CA). This is a more cost-efficient option, but not all organizations can handle the internal management of digital certificates if they lack the necessary IT infrastructure or investment in Hardware Security Modules (HSMs) used to generate and manage encryption keys and Key Encryption Keys (KEKs).
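To make the trade-off concrete, below is a minimal sketch of the internal-CA option described above, written in Python with the open-source "cryptography" package: an operator-run CA issues a certificate for a single gateway. The names and validity periods are illustrative assumptions, and in production the CA private key would be generated and held inside an HSM rather than in application memory.

from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Internal CA: a key pair plus a self-signed root certificate (no external CA)
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example-internal-ca")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Gateway identity: its own key pair and a certificate signed by the internal CA
gw_key = ec.generate_private_key(ec.SECP256R1())
gw_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "gateway-0001")]))
    .issuer_name(ca_name)
    .public_key(gw_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))
    .sign(ca_key, hashes.SHA256())
)

The point of the sketch is the cost profile: nothing here requires a commercial CA, but everything hinges on keeping ca_key safe, which is exactly the HSM investment described above.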

HARDWARE SECURITY AND EDGE COMPUTING

Cryptography and Encryption Key Management

Hardware Security and Ability to Safeguard ID Credentials: Smart IoT edge gateways usually require some embedded hardware security with a secure enclave or isolated environment (e.g., Trusted Platform Module (TPM), Trusted Execution Environment (TEE), and System-on-Chip (SoC)). This allows safe storage or high-value data and applications, as well as encryption keys and digital certificates used in IoT device management. This includes management of the gateway itself, but, in some cases, also management for all adjacent devices, depending on implementer pa-rameters and deployment requirements.

Key Considerations for PKI and Encryption Vendors at the Gateway Level: Making use of PKI in the IoT is quite challenging and must also be addressed at the gateway level. Key considerations include the following:

  • The use of embedded hardware security elements, especially TPMs for higher-end devices that can support crypto-functions

  • Assessing the quality and entropy level of those cryptographic elements (i.e., the ability to run certain functions within an acceptable level of entropy)

  • Storage and processing requirements so as not to overburden other applications or processing operations (especially crucial in IIoT and IT/OT implementations)

  • Ability to integrate with certain brownfield and legacy components, as well as greenfield devices

  • Hardware and software security options that can support key functions depending on targeted IoT applications (e.g., industrial protocol conversion, digital certificate rotation, multi-containerization, IDPS, etc.)

  • Ability to obtain a level of quality of service for cloud management platforms

  • Ability to provide interoperable services for said cloud services

  • Provide control for new security operations that extend beyond network segmentation like data exfiltration protection and Data-Loss Prevention (DLP) at the edge

  • Support all standard next-generation gateway options for data analytics (which should accompany DLP options), filtering, and aggregation, effectively making the transition from the cloud to the edge

Advanced Edge Capabilities

Processing, Data Filtering, Bandwidth Capacity, Real-Time Operations, and the Cross-Vertical Value Proposition: Next-generation hardware capabilities must also include advanced edge processing power. Edge processing is not solely used to expand computing power and hasten software operations. It also extends into several key applications that deal with high-volume and potentially high-quality data traffic. The smart gateway transition into advanced edge processing serves various purposes, the primary ones being decreased bandwidth consumption, intelligence efficiency, real-time operations, and cross-vertical implementations.

Increasing Efficiency of IoT Intelligence and Analytics: Because most data harnessed at the edge is not particularly useful for implementers, it makes little sense to spend additional resources uploading every piece of data only for it to be discarded by implementers or cloud operators. Data filtering and data aggregation at the edge can help sort, manage, discard, and aggregate only the high-value data required according to implementers' specifications, thus boosting intelligence efficiency.
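A minimal sketch of that filter-and-aggregate step, in Python, with the threshold, window size, and field names chosen purely for illustration: anomalous readings are forwarded individually, while routine readings are collapsed into one summary record per window.

import statistics

def filter_and_aggregate(readings, limit=80.0, window=60):
    """Forward anomalies as-is; replace routine samples with one summary per window."""
    upload = []
    for start in range(0, len(readings), window):
        window_vals = readings[start:start + window]
        # High-value data: out-of-range readings are kept individually
        upload.extend({"type": "anomaly", "value": v} for v in window_vals if v > limit)
        # Everything else is reduced to a single aggregate record
        upload.append({
            "type": "summary",
            "mean": statistics.fmean(window_vals),
            "max": max(window_vals),
            "count": len(window_vals),
        })
    return upload

With a 60-sample window, an hour of once-per-second telemetry shrinks from 3,600 uploads to 60 summaries plus whatever anomalies actually occurred.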

Streamlining Real-Time Operations: Increased processing power at the edge, coupled with fewer bandwidth restrictions, data aggregation, and intelligence efficiency, enables real-time operations to run more effectively. Streamlined real-time analytics and intelligence open an entirely new world for the IoT, allowing for precise management of critical or high-value applications, while also boasting a new value proposition for IoT security operations.

Software Security and Virtualization

Modular OSs and Security Options

Modular OSs and SDKs: A key element in any smart IoT edge gateway is the presence of a secure and customizable OS to work as a stable platform, allowing communication between end devices and cloud services, and the protected use of applications. The use of a flexible SDK from gateway vendors is always a welcome sight for implementers. While the use of open-source software tools is not always the best choice security-wise, the Linux-based OS has become quite common. Its merit as a flexible and customizable software toolset is almost unmatched, prompting many gateway software developers to base their products on Linux kernels. This is especially true for monolithic Linux kernels, which come with already-added device drivers, direct hardware communication, and application multitasking. Although security might be somewhat lacking in monolithic kernels, they are designed for devices with a higher digital footprint.

Advanced Security Options: Smart IoT edge gateways are also expected to have a greatly expanded security arsenal at their disposal. These options are highly dependent on the target application and should not be part of the gateways’ mandatory design because that would increase the cost considerably.

Firmware Updates - Security Capabilities Depend on the Connectivity Options On Which They Are Built: The network architecture and communication requirements for IoT deployments may very well be the deciding factor in any IoT implementation because analytics, management, and security capabilities depend on the application's connectivity options. Firmware updates, cryptographic processes, managed security services, device life cycle management, and many cybersecurity endeavors must be enabled on top of the communication options on which they are built and the vertical or application at hand. One of the most crucial security operations for smart IoT edge gateways is the ability to perform firmware updates in a timely, secure, and reliable manner, which, in turn, frames many further options related to connectivity and security.

Figure 1. Advanced security options for IoT gateways
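As a minimal illustration of the secure-update requirement, the sketch below, in Python with the "cryptography" package, verifies a firmware image against a vendor signing key before anything is flashed. The Ed25519 choice and the function names are assumptions for illustration, not a reference implementation.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def firmware_is_authentic(image: bytes, signature: bytes, vendor_pubkey: bytes) -> bool:
    """Accept an update only if the image verifies against the vendor's public key."""
    key = Ed25519PublicKey.from_public_bytes(vendor_pubkey)
    try:
        key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

# A hypothetical update loop would call this first, then flash to a staging
# (A/B) partition so that a failed or interrupted update can roll back.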

Breaking Down the Market

Aided by the influx of new Internet Protocol (IP) devices and the upheaval of new IoT integrations across all market spectrums, IoT gateways are set to experience significant growth over the next 5 years. As shown in Table 1, IoT gateway shipments are expected to increase from 102 million in 2020 to 169.2 million in 2025, an increase of roughly 66%. Smart IoT gateway shipments will increase from 8.5 million in 2020 by a factor of roughly 2.5 to 21.4 million in 2025, an impressive 20% Compound Annual Growth Rate (CAGR).

Table 1. IoT gateway shipments versus smart IoT gateway shipments, world markets, forecast: 2018 to 2025
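Those growth figures can be sanity-checked directly from the numbers quoted above; a few lines of Python reproduce the roughly 66% overall growth, the roughly 20% CAGR for smart gateways, and the 12.6% penetration rate cited below.

total_2020, total_2025 = 102e6, 169.2e6
smart_2020, smart_2025 = 8.5e6, 21.4e6

overall_growth = total_2025 / total_2020 - 1            # ~0.66, i.e., ~66%
smart_cagr = (smart_2025 / smart_2020) ** (1 / 5) - 1   # ~0.20, i.e., ~20% CAGR
penetration_2025 = smart_2025 / total_2025              # ~0.126, i.e., 12.6%
print(f"{overall_growth:.0%}, {smart_cagr:.0%}, {penetration_2025:.1%}")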

Examining the Penetration Rate for the "Smarter" Components: The penetration rate of IoT gateways featuring the more advanced "smart" components is expected to increase from 8.3% in 2020 to 12.6% in 2025. While this percentage may appear relatively small, it is a quite potent predictor of the future evolution of the IoT in its entirety, because almost every vital piece of technological evolution that concerns the IoT (from embedded security to software and cloud security, cellular protection, and intelligence operations) has some aspect reflected in the gateway device itself (use of native cloud support, encryption, device management, OS, SDK, etc.).

What Do the Data Suggest? From the perspective of IoT connectivity and, perhaps more importantly, from the perspective of digital security, the data suggest that IoT players can at least expect some level of sophistication and intelligence operations at the edge, aided by IoT gateways. Unfortunately, the industry has certainly not reached the threshold required for truly secure, massive IoT integrations. With the fervent increase of IoT connections, which ABI Research forecasts will reach 20 billion by 2025, a mere 169.2 million IoT gateways (not to mention the fraction of 21.4 million of their "smarter" versions) is not nearly enough to safeguard future IoT ecosystems through edge-based security.

Take a Deeper Dive Into Digital Security

Since 1990, ABI Research has partnered with hundreds of leading technology brands, cutting-edge companies, forward-thinking government agencies, and innovative trade groups around the world. ABI Research's leading-edge research and worldwide team of analysts deliver actionable insights and strategic guidance on the transformative technologies that are reshaping industries, economies, and workforces today.

ABI Research's Digital Security service offers end-to-end coverage of the digital security ecosystem – from information and communication technologies to the operational control process. This research is particularly salient to enterprises facing the growing proliferation of cyber threats while also becoming increasingly connected, as in the convergence of IT and OT.

Dimitrios Pavlakis, Industry Analyst at ABI Research, is responsible for digital, biometrics, and IoT security research, including cybersecurity, machine learning, and artificial intelligence, with a focus on a wide spectrum of enterprise, consumer, and governmental verticals. He closely studies related markets, products, technologies, and applications from a hardware (devices, sensors, etc.), software (algorithm design, data extraction, security, etc.), and consumer (mentality, adoption, etc.) perspective.
 -
Artificial Intelligence

Beyond AI, Ambient Intelligence is on the Horizon

The convergence of Artificial Intelligence, ambient connectivity, and the IoX will create the world of Ambient Intelligence – Artificial Intelligence 2.0, if you will.

Ambient Intelligence (AmI) – one of those terms that, unless you are close to the topic, sounds like something out of the 1960s. But today, because of the micro-scale of available technology, it is poised to become a fundamental platform for the Internet of Anything/Everything (IoX) and smart "X." It has implications for 5G, but will not likely emerge measurably until 6G, redefining intelligent communications.

The vision of an AmI future sees us surrounded by intelligent electronic environments, responsive and sensitive to our desires, requirements, and needs. Ubiquitous sensors will be embedded in every nook and cranny of our world. Predictions abound that AmI will be heavily populated by gadgets and systems with powerful capabilities built on nano-, bio-, information, and communication technology (NBIC).

AmI is already a reality. But it is not necessarily or systematically "connected." Its foundations have been laid, and new technological building blocks will soon be added to solidify them – led by 5G, which will make it possible to connect devices at densities of up to one million objects per square kilometer! And these devices and the intelligent interconnect will be controlled by AmI.

AmI was born in 1998 from a vision by Royal Philips, of The Netherlands. It was the brainchild of a consortium of individuals, including Eli Zelkha and Brian Epstein of Palo Alto Ventures who, with Simon Birrell, coined the name ‘Ambient Intelligence’. It is defined as “envisioning a world where homes will have a distributed intelligent network of devices that provide us with information, communication, and entertainment.”

While the smart home was the original vision of AmI, today, AmI has “left the house” for a much more ubiquitous positioning, thanks to Internet dust (See a post I wrote a few years ago.).

AmI has evolved into a vision of how people interact with technology everywhere: a seamless environment of computing, advanced networking technology such as Internet dust, and intelligent interfaces. It is aware of the specific characteristics of human presence and personalities. It takes care of needs and is capable of intelligently responding to spoken or gestured indications of desire; it can even engage in intelligent dialogue (although this will take a while to develop to a level that is realistic).

Internet Dust

Because sensor and circuit technology has come such a long way, interface devices for the IoX have become tiny – around one square millimeter, some even smaller. And power for them can be supplied by a number of platforms: batteries, solar, direct connection, even energy harvesting. That means that they can be fitted to virtually any product, and for any application.

It is reasonable to expect that, soon, everything that can be created – wearables, currency, appliances, vehicles, the paint on our walls, and the carpets on our floors – and even some things that cannot (air, water?) will have some measure of embedded intelligence. Expect that networks of tiny sensors and actuators, which some have termed "smart dust," will be prolific.

However, as usual, how this will play out is still in the visionary stage for some of these technologies. There are issues in all of the key technologies just discussed that will need to be addressed before a ubiquitous state of AmI exists across all segments.

The AmI Difference

What makes AmI stand out is that it will provide personalized services, largely via big data, on a scale that will dwarf anything we have seen so far. AmI will surround us with intelligent objects that will understand us, to some degree, because the dust and other objects will continually, and on a real-time basis, feed information to the "cloud" for analysis and tweaking to our particular environment and circumstances. It will also be able to preemptively assess what we want to do, thereby providing a smooth progression of the actions we want to take.

Certainly, there are great visions for AmI. Some may be a bit of a stretch for now, but others can certainly be envisioned. For example, the computing and communications we now have will be interfaced to the sensors and devices on the IoX. The next level will be capable of both recognizing and responding to the presence of different individuals and entities in a seamless, inconspicuous, and transparent way. This will be accomplished via a continuous loop of actions (see Figure 1) that begins and ends with sensing.

Figure 1. The flow of data from input to result. Artwork by Stephen M. Siegal

The number of objects that sensors can attach to is limitless. As well, sensors can be mobile – free-floating or detachable. Examples of sensors:

  • Ambient and wireless

    • Motion (cabinets and drawers, people, animals, bath fixtures, proximity)
    • Atmosphere (fire/smoke, carbon monoxide, light)
    • Appliances and plumbing
    • Locks, temperature, sound detection
  • Wearables

    • Health
    • Exercise
    • Clothing
    • Location
    • Virtual

These are only two of an extensive list of categories and sub-categories of devices and targets to which sensors can be attached.

The Elements

Sensing – The first element that needs to be in place is the sensor. And not just any sensor. With AmI, the network must be able to respond to real-world stimuli. Components must integrate agile agents that perceive and respond intelligently, not simply pick from a database full of scenarios by algorithms (which would not be realistic for dust, or micro-type sensors with limited resources, anyway).

Once the data is captured, intelligent analytics are applied. This is done at a centralized system of one sort or another if the sensor itself is only used to capture, store, and forward data. If the system is distributed, the sensors will have some type of onboard processing power that will preprocess, to whatever degree is designed into the system.

The type of network depends largely on the application. Mobile dust networks, such as those that may be used to monitor a forest fire, will likely just report to the central station. Fixed networks, such as weather sensors, likely will have some local processing power integrated.

In any network that is somewhat ubiquitous, the data set will generally consist of multiple volumes of multi-dimensional temporal or spatial information. Because systems cannot be made 100 percent reliable, the system must be able to discern, intelligently, between non-essential data, erroneous data from a noisy sensor, or interference of some sort. Or there can be missing data from a defective sensor; for example, a sensor fails its data set redundancy check, or some segment of it may be incomplete.

This is where big data analysis techniques would be useful. Large volumes of sensor data are collected from disparate sources, and part of it may be erroneous or missing. Synthesizing it to produce accurate and rational results requires new methodologies and models that are now being developed under the big data umbrella. However, today, most sensor data fusion is done with Kalman filters or probabilistic approaches.
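For readers unfamiliar with the technique, here is a deliberately minimal scalar Kalman filter in Python; real sensor fusion is multivariate, and the noise variances used here are illustrative assumptions.

def kalman_1d(measurements, q=1e-3, r=0.5):
    """Smooth a noisy scalar sensor stream.

    q: process-noise variance (how fast the true value may drift)
    r: measurement-noise variance (how noisy the sensor is)
    """
    x, p = measurements[0], 1.0        # initial estimate and its variance
    estimates = []
    for z in measurements:
        p += q                         # predict: uncertainty grows between samples
        k = p / (p + r)                # Kalman gain: how much to trust the sensor
        x += k * (z - x)               # correct the estimate toward the measurement
        p *= (1 - k)                   # the correction shrinks the uncertainty
        estimates.append(x)
    return estimates

A reading from a defective or noisy sensor moves the estimate only in proportion to the gain k, which is how erroneous data gets discounted rather than trusted outright.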

One early example of this is the MavHome smart home project [1]. Collected motion and lighting information alone results in an average of 10,310 events each day. In this project, a data mining pre-processor identifies common sequential patterns in this data, then uses the patterns to build a hierarchical model of resident behavior.

However it is approached, assessment algorithms must be real-time responsive, adaptive, and have the ability to apply a variety of reasoning types, including recognition, user modeling, activity analysis, decision making, and spatial-temporal reasoning.

Modeling – One of the features that AmI integrates is the ability to differentiate between general computing algorithms and specific ones that can adapt to or learn about the user. Such “learning” systems do exist and are fairly adept at this.

Even so, the problem with these systems is that doing this with any amount of efficiency requires a deep well of hardware and software resources. That works in many cases and will work in AmI cases with sufficient resources.

However, the agile systems envisioned in AmI will need to be able to do this, efficiently and accurately, in a small form factor, with the ability to refine and adapt themselves on the fly.

The volume of data generated by sensors can challenge modeling algorithms. Adding audio and visual data into the model increases the data quantity by, at least, an order of magnitude. It also adds another dimension of sensed data. For example, video data can be used to find intertransaction (sequential) data in observed behavior or actions, which is useful in identifying and predicting errant conditions in an intelligent environment.

One of the most promising applications in AmI is identifying social interactions, especially with the proliferation of social networking technologies. This has broad implications, all the way from predictive crowd behavior to corporate meeting environments. It can also be a tool in determining the state of such data (supported, hearsay, false, or manufactured) for social media platforms, as has been the case recently.

Prediction and Recognition – Prediction and recognition are, perhaps, the two most key elements of reasoning in AmI environments. Prediction is accomplished by attestation, from which comes intelligence. Intelligence begets recognition, which is used in prediction. Theoretically, sufficient reiterations of this cycle will increase the intelligence within the networks to near-human capability.

This has huge implications in the medical space. AmI can, literally, be a watchdog for patients with dementia or physical impairment.

For example, in theoretical AmI models, such as the Neural Network House, the networks use prediction and recognition to control home environments. This is accomplished, on the fly, by predicting the location, routes, and activities of the residents, based on previous recognition as well as prediction by machine learning.

A number of prediction algorithms have been developed that can predict activities for single, as well as some multiple resident cases. These algorithms are relatively adept at predicting resident locations, even some resident actions. The AmI network can, with a reasonable degree of accuracy, anticipate the resident's needs and even assist, or automate performing the action.

This has huge implications in the medical space. AmI can, literally, be a watchdog for patients with dementia or physical impairment. The same applies to injury and surgery cases, as well as other situations.

Decision Making – Part of the AmI platform is AI and deep learning. Neural networks are a key element in the decision-making process. Temporal reasoning can be implemented in conjunction with rule-based algorithms to perform any number of functions; from identifying safety concerns to analyzing medical data and adjusting medications, to diet planning based upon wearable sensor data, to environmental comfort settings.

Temporal and Spatial Components; The Support Elements – Spatial and temporal reasoning are also crucial elements of AmI. There is a wide collection of algorithms that have been developed and honed to deal with the various segments of spatial, temporal, and spatio-temporal reasoning. Such algorithms are another element of the network that allows AmI to understand the activities in an AmI application.

Any intelligent system relies on either explicit or implicit reference points of where and when the events of interest occur. For any network to be able to decide on actions, preemptively, or in real-time, an awareness of what the targets are is essential.

This is where space and time come into the equation. For example, assume a situation is developing where someone has left a stove burner on and the temperature around the stove rises. In this scenario, time and temperature have to be correlated to assess the situation, relative to the rate of rise of heat vs. time, location, and perhaps even air quality. The network has to understand that this condition is different from, say, the heat coming on, which may produce a similar condition if there is a heating duct near the stove.
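A toy version of that spatio-temporal rule, in Python, with every threshold an illustrative assumption: the decision depends not on temperature alone, but on how fast it is rising over time at that location.

def stove_risk(samples, hot_limit=70.0, rise_limit=2.0):
    """samples: (seconds, deg_c) pairs from the sensor near the stove, oldest first.

    A heating duct cycling on produces a modest, bounded rise; a burner left
    on produces a sustained fast rise that also crosses the absolute limit.
    """
    (t0, c0), (t1, c1) = samples[0], samples[-1]
    rise_per_min = (c1 - c0) / max(t1 - t0, 1) * 60.0
    return c1 > hot_limit and rise_per_min > rise_limit

print(stove_risk([(0, 24.0), (300, 82.0)]))   # fast, sustained rise -> True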

Missive

There are, of course, many more elements to AmI, but space and time limit what can be discussed in a paper of this type.

One of the issues that has prevented large-scale development in fields such as neural networks, AI, and AmI is the tremendous processing power required to develop such "intelligence."

However, the evolving state of technology is about to change all of that. Semiconductor technology is finally crossing the thresholds of capacity, performance, size, and integration. The new technologies in chip configuration and systems, particularly quantum computing, will see tremendous achievements in technology to support AI, AmI, the Internet, and Smart everything as well as peripheral and parallel segments, industries, and vectors that go with it.

It is, indeed, an exciting time to stand at these thresholds.

[1] Wikipedia, et al.

Ernest Worthman is the Executive Editor of AWT Magazine.
 -
Security

Vulnerabilities in GTP Threaten Mobile Operators and Subscribers

Earlier, Positive Technologies described how SS7, the mobile roaming protocol that is most mature in terms of security investment, was leaking subscribers' IMSIs. The IMSI is the key identifier needed by hackers for complex attacks on all mobile protocols, and it was exposed on an incredible 93 percent of all networks tested. Many operators had firewalls or SMS home routing platforms deployed to combat this issue. But three-quarters of all firewalls could be evaded, and over half of SMS home routing systems circumvented.

This perfectly shows how, when challenged by the closure of protocol vulnerabilities, hackers can adapt and evolve, and use bypass techniques just like they do in the general IT world.

Positive Technologies has now released the third in a series of papers covering telecom security vulnerabilities. All three point to a worrying level of inertia in security progress, with the gradual security improvements observed in previous years either slowing or even reversing.

GTP, the protocol that has been used to transmit user data and control traffic ever since mobile devices first connected to the Internet, is similarly vulnerable: spoofing, fraud, denial-of-service, and other attacks are possible on almost every network.

During the intermediate phase of non-standalone 5G, it will continue to be used in much the same way. Only when this evolution is complete will GTP’s role change.

...hackers can adapt and evolve...

A fundamental flaw undermining the GTP protocol is one it shares with other roaming protocols: the lack of verification that the user can actually be in the location the message originates from. Is it possible the subscriber is now in Paris if he was just in Berlin? You need dynamic tables of locations and geographic distances to be sure. An added complexity in GTP is that this can only be achieved using another protocol, as there is no facility in GTP itself. During our GTP testing, this feature was, almost without exception, missing.
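The check itself is not complicated; what GTP lacks is anywhere to perform it. Below is a hedged Python sketch of the velocity-plausibility test described above, with the 1,000 km/h cutoff an illustrative assumption:

from math import radians, sin, cos, asin, sqrt

def implied_speed_kmh(lat1, lon1, lat2, lon2, elapsed_s):
    """Great-circle distance between two sightings divided by elapsed time."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    distance_km = 2 * 6371 * asin(sqrt(a))
    return distance_km / (elapsed_s / 3600)

# Subscriber seen in Berlin, then a "Paris" message arrives 30 minutes later:
speed = implied_speed_kmh(52.52, 13.405, 48.857, 2.352, 30 * 60)
print(speed > 1000)   # faster than any airliner -> treat the message as suspect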

Another weakness allows a hacker to send messages with subscriber credentials appearing to be verified by the SGSN/SGW; this allows the hacker to avoid some network checks and convince operators’ network equipment to trust these bogus packets. This was exacerbated by a lack of simple IMSI verification to check whether the originating party should be allowed to contact the subscriber.

Possibly the most concerning finding of all was successful GTP-in-GTP attacks, a decades-old method that is easily fixed by configuration.

This attack consists of sending a GTP command message in the user plane. On some occasions, the network node will see the command messages and, instead of ignoring them, helpfully recognize, extract and act on the requests. The biggest problem is that this expands security borders to include any rooted mobile phone or other device on your network.
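Because GTP-C and GTP-U run on well-known UDP ports (2123 and 2152), a border node can detect the pattern by glancing inside the decapsulated user-plane payload. The following Python sketch is a simplification, IPv4-only and ignoring edge cases, meant only to show the shape of the check a GTP firewall performs:

import struct

GTP_PORTS = {2123, 2152}   # GTP-C and GTP-U

def payload_is_gtp_in_gtp(inner_ip: bytes) -> bool:
    """inner_ip: the IPv4 packet carried as 'user data' inside a GTP-U tunnel.

    Returns True when that user data is itself addressed to a GTP port,
    i.e., the GTP-in-GTP pattern the node should drop instead of acting on.
    """
    header_len = (inner_ip[0] & 0x0F) * 4      # IPv4 IHL field, in bytes
    if inner_ip[9] != 17:                      # IPv4 protocol 17 = UDP
        return False
    (dst_port,) = struct.unpack("!H", inner_ip[header_len + 2:header_len + 4])
    return dst_port in GTP_PORTS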

Similar configuration issues exposed users' GTP Tunnel Endpoint Identifiers (TEIDs), which can be used to redirect subscriber data to the hacker, possibly leading to man-in-the-middle attacks. This should be no surprise, as configuration accounts for around a third of vulnerabilities across all audits and multiple industries. This is why significantly improving security can be inexpensive.

One consequence of these issues, fraud, was possible in every tested network. The researchers could use service at the expense of other subscribers and/or the operator. Using a compromised subscriber identifier, the charges fall on that customer. However, they could also use a non-existent identifier, thereby defrauding the operator. Additionally, attackers could escalate these frauds to access operator services or possibly third-party services.

For simplicity, some services have pass-through authentication from the operator, where the network believes the SIM is authenticated. This gives automatic access to self-service portals to check balances, for instance. This could potentially cause GDPR issues for the operator: private information gets released, and huge regulatory fines are incurred.

The impact on any third-party services accessed in a similar way is difficult to quantify.

Beyond fraud, these impersonation techniques could be used by hackers to cover their tracks for other attacks aimed at governments or large institutions. To avoid security teams tracing them, hackers often use botnets or other techniques. Using GTP, the origination will be the pool of IP addresses of the operator, which on further investigation will be associated with an unsuspecting subscriber or non-existent account.

Every operator was susceptible to Denial of Service (DoS). This is particularly important, as IoT, the primary driver for 5G, requires resilience. It's imperative that the pressure gauge immediately informs you the oil pipeline has a problem.

Our testing showed we could exhaust network resources, costing legitimate users and IoT devices their Internet access. This was possible using real or non-existent identifiers, and could also be possible using GTP-in-GTP, opening up the threat internally, perhaps via malware on a simple IoT device.

Overall, the threat to telecom security is very real and expanding with the consolidation of technologies from wider IT in 5G. Securing the existing foundations is imperative before we move onto the next steps.

Jimmy Jones is a telecom cybersecurity expert at Positive Technologies, a global cybersecurity company that has pioneered research into telecoms security, discovering over 50 methods for exploiting telecoms vulnerabilities and dozens of zero-day flaws in telecoms systems.
 -
Technology Fundamentals

What Is Edge Computing?

Edge computing is the concept of capturing and processing data as close to the source of the data as possible via processors equipped with AI software.

Edge computing and donuts have one thing in common: the closer they are to the consumer, the better. A trip to the corner donut shop may take a bit, but a box of donuts within reach is instant gratification.

The same holds true for edge computing. Send data to an AI application running in the cloud, and it delays answers. Process that data on an edge device, and it is like grabbing directly from that pink box of glazed raised and rainbow sprinkles.

Edge computing — a decades-old term — is the concept of capturing and processing data as close to the source of the data as possible via processors equipped with AI software. Because edge computing processes data locally — on the “edge” of a network, instead of in the cloud or a data center — it minimizes latency and bandwidth needs, allowing for real-time feedback and decision-making by autonomous machines.

Frequently, the processors are in the form of intelligent sensors embedded in Internet of Things devices. These sensors could be on heavy machinery in a factory, processing data from the machines and alerting supervisors when malfunctions could result in an accident.

Businesses often place edge servers in close proximity to the sensors, usually in a server room or closet within a store, hospital, or warehouse.

The always-on, instantaneous feedback that edge computing offers is especially critical for applications such as autonomous vehicles, where saving even milliseconds of data processing and response times can be key to avoiding accidents. Instantaneous feedback at the edge is also important in hospitals, where doctors rely on accurate, real-time data to treat their patients.

Edge computing is everywhere — used in everything from retail stores for smart self-checkout, to warehouses where it assists with supply-chain logistics and quality inspections.

Why Is Edge Computing Needed?

By 2025, it is estimated that 150 billion machine sensors and IoT devices will stream continuous data that will need to be processed. These sensors are on all the time — monitoring, picking up data, reasoning about what they are sensing and taking action.

Edge computing processes this data at the source, reducing latency, or the need to wait for data to be sent from a network to the cloud or core data center for further processing, allowing businesses to gain real-time or faster insights.

These compute-intensive workloads demand high efficiency and speed in data collection and analysis, and the surge of data they generate is driving the deployment of high-performance edge computing for AI.

Moreover, emerging technologies such as 5G networks, which are expected to clock in at speeds 10x faster than 4G, only increase the possibilities for AI-enabled services, requiring further acceleration of edge computing.

How Does Edge Computing Work?

Edge computing works by processing data as close to the source or end user as possible. It keeps data, applications, and computing power away from a centralized network or data center.

Data centers are centralized servers often situated where real estate and power are less expensive. Even on the zippiest fiber optic networks, data cannot travel faster than the speed of light. The physical distance between data sources and data centers causes latency. By bringing computing to the edge, closer to the source of data, edge computing reduces that latency.
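The arithmetic is unforgiving. Light in optical fiber covers roughly 200 kilometers per millisecond, so distance alone sets a latency floor before a single instruction of processing happens, as this back-of-the-envelope Python calculation shows (the distances are illustrative):

FIBER_KM_PER_MS = 200   # light in optical fiber travels ~200 km per millisecond

def round_trip_ms(distance_km):
    return 2 * distance_km / FIBER_KM_PER_MS

print(round_trip_ms(1500))   # distant cloud region: ~15 ms of pure propagation delay
print(round_trip_ms(1))      # edge server in the building: ~0.01 ms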


Edge computing can be run at multiple network nodes to literally close the distance between where data is collected and processed to reduce bottlenecks and accelerate applications.

At the periphery of networks, billions of IoT and mobile devices operate on small, embedded processors, which are ideal for basic applications like video.

That would be just fine if industries and municipalities across the world today were not applying AI to data from IoT devices. But they are.

By using edge AI, a device would not need to be connected to the internet at all times. Instead, a device could process data and make decisions independently without a connection.

For example, an edge AI application on a microprocessor in a robot could process data from the robot in real time and store results locally on the device. After some time, the robot could connect to the internet and send specific data to the cloud for storage or further processing. If the robot was not operating on the edge, it would continuously stream data to the cloud (taxing its batteries), take longer to process data, and require a constant internet connection.
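In pseudocode-level Python, that store-and-forward pattern looks roughly like the sketch below; run_local_model(), act(), and the upload URL are hypothetical placeholders, not a real robotics API.

import json
import time
import urllib.request

results_buffer = []

def on_frame(frame):
    """Runs on the robot for every sensor frame: no network required."""
    label = run_local_model(frame)      # hypothetical on-device inference call
    act(label)                          # hypothetical real-time actuation
    results_buffer.append({"t": time.time(), "label": label})

def sync_to_cloud(url):
    """Called only occasionally, when connectivity and battery allow."""
    global results_buffer
    request = urllib.request.Request(
        url,
        data=json.dumps(results_buffer).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)     # ship the distilled results, not raw frames
    results_buffer = []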

What Are the Benefits of Edge Computing?

The shift to edge computing offers businesses new opportunities to glean insights from their large datasets. The four main benefits of edge computing are:

  • Reduced latency: Bringing AI computing to where data is generated, rather than collecting and uploading data to a centralized data center or cloud, reduces latency.

  • Improved security: As edge computing allows for data to be processed locally, the need to send sensitive data to the public cloud is decreased.

  • Lowered expenses: Creating more and more data increases bandwidth and data storage costs. Using edge computing and local data processing means less data needs to be sent to the cloud.

  • Greater range: Internet access is required for traditional cloud computing. But edge computing processes data without internet access, extending its range to previously inaccessible remote locations.

Edge Computing vs. Cloud Computing vs. Fog Computing

These methods of computing are often used and mentioned together for strengthened computing power. However, they are distinctly different:

  • Cloud computing uses a network of remote servers hosted on the internet
  • Edge computing uses the edge of a device or server
  • Fog computing uses the local area network (LAN) layer of the network architecture

In recent years, cloud computing has been the preferred processing method due to its capacity, elasticity, and ability to store and process data without physical hardware.

But cloud computing is limited by the speed of light and internet bandwidth. As more businesses deploy AI within their offerings, the demand for faster and more reliable data increases, putting a strain on cloud computing’s networking bandwidth.

To lighten this strain, edge computing has been incorporated into many IoT devices for faster data processing and response times.

Similar to edge computing is fog computing. The difference is that while edge computing processes data at the network edge, fog computing processes data on a device’s local area network. Its strength lies in its ability to process more data than edge computing, but it is limited to its physical connection to devices in the LAN.

Edge Computing: IoT and 5G

Edge computing plays a critical role in the recent advancements in technologies such as the 5G network and IoT applications.

Edge and IoT

With the flood of data coming from IoT devices, manufacturers have realized both the financial and operational benefits of processing data at the edge. With edge computing, IoT devices and sensors can operate with reduced latency and less dependence on the cloud for costly data storage and processing.

For example, with the Metropolis platform for intelligent video analytics, data from trillions of sensors and IoT devices can be analyzed in real time. This can provide actionable insights for applications such as public services for anomaly detection and disaster response, logistics for supply forecasting, and traffic management for incident detection and traffic light optimization.

For an effective disaster response, acting in a timely manner is crucial. By implementing an edge platform like Metropolis, instantaneous and constant data on the location of personnel, vehicles and equipment needed for first responder efforts is available to help ensure the safety of citizens. Moreover, by implementing edge computing that gathers data from IoT sensors and devices and not through cellular networks or internet connection, a more reliable and efficient disaster response plan is possible with the potential to save lives.

Edge and 5G

The amount of data being generated at the edge is growing exponentially and with the rollout of 5G infrastructure, new breeds of applications are emerging.

While AI is enabling insights from mass data, these applications will rely on 5G’s fast bandwidth, low latency, and reliability to provide access to that data.

With the rollout of 5G, a wide portfolio of services is emerging to run AI workloads at the edge and make real-time analysis possible. These range from remotely controlling equipment and machines with cameras and other sensors to using cameras to improve site security and operational safety — all while supporting billions of media-rich devices that will collectively consume and produce zettabytes of data.

Edge computing is critical for such technological innovations and is the only way to meet the latency requirements needed for 5G to operate. It also helps to virtualize multi-tenant 5G edge nodes securely and efficiently.

Four Edge Computing Examples

Not only does edge computing reduce latency, but it also provides end-users with better, more seamless experiences. Here are a few examples of edge applications across multiple industries.


Edge Computing for Retailers

The world’s largest retailers are enlisting edge AI to become smart retailers. Intelligent video analytics, AI-powered inventory management, and customer and store analytics together offer improved margins and the opportunity to deliver better customer experiences.

For example, using the advanced EGX platform, Walmart is able to compute, in real time, the more than 1.6 terabytes of data it generates each second. It can use AI for a wide variety of tasks, such as automatically alerting associates to restock shelves, retrieve shopping carts, or open up new checkout lanes.

Connected cameras numbering in the hundreds or more can feed AI image recognition models processed on site. Meanwhile, smaller networks of video feeds in remote locations can be handled by Jetson Nano, linking with EGX and NVIDIA AI in the cloud.

Store aisles can be monitored by fully autonomous and capable conversational AI robots powered by Jetson AGX Xavier and running NVIDIA Isaac for SLAM navigation.

Whatever the application, GPUs at the edge provide a powerful combination for intelligent video analytics and machine learning applications.

With edge AI, telecommunications companies can develop next-generation services to offer their customers, providing new revenue streams.

Using EGX, telecom providers can analyze video camera feeds using image recognition models to help with everything from foot traffic to monitoring store shelves and deliveries.

For example, if a 7-Eleven ran out of donuts early in the morning on a Saturday in its store display, the convenience store manager could receive an alert that it needs restocking.

Edge Computing for Cities

Fortune 500 companies and startups alike are adopting AI at the edge for municipalities. For example, cities are developing AI applications to relieve traffic jams and increase safety.

Verizon uses Metropolis, the IoT application framework that, combined with Jetson’s deep learning capabilities, can analyze multiple streams of video data to look for ways to improve traffic flow, enhance pedestrian safety, optimize parking in urban areas, and more.

Ontario, Canada-based startup Miovision Technologies uses deep neural networks to analyze data from its own cameras and from city infrastructure to optimize traffic lights and keep vehicles moving.

Miovision and others’ work in this space can be accelerated by edge computing from the NVIDIA Jetson compact supercomputing module and insights from NVIDIA Metropolis. The energy-efficient Jetson can handle multiple video feeds simultaneously for AI processes. The combination delivers an alternative to network bottlenecks and traffic jams.

Edge computing scales up, too. Industry application frameworks like Metropolis and AI applications from third parties run atop the EGX platform for optimal performance.

Edge Computing for Automakers and Manufacturers

Factories, retailers, manufacturers, and automakers are generating sensor data that can be used in a cross-referenced fashion to improve services.

This sensor fusion will enable retailers to deliver new services. Robots can use more than just voice and natural language processing models for conversational interactions. Those same bots can use video feeds to run on pose estimation models. Linking the voice and gesture sensor information can help robots better understand what products or directions customers are seeking.

Sensor fusion could create new user experiences for automakers to adopt for competitive advantages as well. Automakers could use pose estimation models to understand where a driver is looking along with natural language models that understand a request that correlates to restaurant locations on a car’s GPS map.

Edge Computing for Gaming

Gamers are notorious for demanding high-performance, low-latency computing power. High-quality cloud gaming at the edge ups the ante. Next-generation gaming applications involving virtual reality, augmented reality and AI are an even bigger challenge.

Telecommunications providers are deploying RTX Servers, which deliver cinematic-quality graphics enhanced by ray tracing and AI, to serve gamers around the world. These servers power the GeForce NOW cloud gaming service, which transforms underpowered or incompatible hardware into powerful GeForce gaming PCs at the edge.

Taiwan Mobile, Korea’s LG U+, Japan’s SoftBank, and Russia’s Rostelecom have all announced plans to roll out the service to their cloud gaming customers.

The Future of Edge Computing

According to market research firm IDC, the edge computing market will be worth $34 billion by 2023. The emergence of 5G will enable the transition from computing at centralized data centers to computing at the edge, unlocking potential opportunities that were not previously available.

From video analytics to autonomous vehicles to gaming, edge computing is creating more possibilities to deliver immersive, real-time experiences that demand low latency and reliable connectivity.

Scott Martin joined NVIDIA in 2018. He was previously an editor at The Wall Street Journal, USA Today, Red Herring, and CNET.
 -
5G Antenna Technology

How 4G and 5G Antennas Really Work

A parable – If antennas were orchestral instruments, it would be much easier to understand how they do what they do.

Picture yourself in an open space – a meadow if you will – on a quiet sunny day. In front of you, 30 meters away, is a full opera company. They are singing the Canadian national anthem. Their singing is crisp and clear, just as it should be when you are dead center in front of opera singers.

Then, you start moving to the right. You are following the path of a semicircle, centered on the platform where the opera singers are standing, with a radius of 30 meters. As you move along the semicircle, the singing becomes quieter. This is normal – you are moving away from the center, and the sound does not reflect from a nearby wall or ceiling since you're in an open space.

When you reach the end of the semicircle, you're in line with the opera singers, but still 30 meters away. The sound is quiet now. This is how a 4G antenna radiates on the horizontal plane. Most of the RF signal is delivered dead center in front of the antenna panel, then gradually weakens until it reaches its lowest point when you are lined up with the antenna.

Flying High with 4G

Now picture yourself and the opera levitating 30 meters above the ground. You can still move, but this time only above and below the singers. You are still moving around a semicircle with a radius of 30 meters, but the semicircle is now vertical. Again, you will hear the Canadian anthem loud and clear when in front of the singers, but as you move above them, the sound gradually gets quieter. Finally, when you are levitating 30 meters above the singers, you will not be able to hear much at all. This is how a 4G antenna radiates on the vertical plane. Again, most of the signal radiation is dead center in front of the antenna panel, then it gradually loses intensity as you move away, above or below the antenna.

Getting Louder with 5G

Now, let us give each opera singer a bullhorn. The sound is much stronger now. Imagine you are still moving around a horizontal or vertical semicircle around the opera. This time, the singers point their bullhorns in your direction as you move. Because the sound is following you, it stays as loud as it was when you were dead center in front of the opera, no matter where you are.

This is how 5G antennas work. The bullhorn is 5G beamforming, and the singers turning to follow you is dynamic beamforming, which tracks the user as they move away from the center of a 5G antenna panel.
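
For readers who want the math behind the bullhorn, a standard textbook expression for a uniform linear array of N elements with spacing d (a generic formulation, not any vendor's specific implementation) is:

\[ AF(\theta) = \sum_{n=0}^{N-1} e^{\,jn\left(kd\sin\theta + \beta\right)}, \qquad k = \frac{2\pi}{\lambda} \]

Setting the per-element phase shift to \(\beta = -kd\sin\theta_0\) points the main beam at angle \(\theta_0\); dynamic beamforming simply updates \(\beta\) as the user moves, which is the singers turning their bullhorns to follow you.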

How 5G Can Serve Multiple Users

We have just explained how 5G works with a single user exchanging data with the base station. The opera is a 5G base station, the listener is the user equipment (UE), and the Canadian national anthem is the data exchanged between the base station and the UE.

But what happens when there is more than one UE? Let us assume there are two listeners. They are both located on that horizontal semicircle. One is a bit to the left, and the other is a bit to the right of the opera. Both are still 30 meters away. The one to the left wants to hear the Canadian national anthem, and the one to the right wants to hear the American national anthem. This can be done in three different ways:

All singers, still carrying bullhorns, turn to the left and sing the first verse of the Canadian anthem toward the first listener. Then, when they finish the first verse, the singers turn to the other listener and sing the first verse of the American national anthem. The listeners record their respective anthems on their smartphones while the opera sings in their direction, and hit the pause button when the opera turns away.

The total duration of the performance is double the time it takes to listen to each anthem. Increasing the number of listeners from one to two slowed down the exchange of information. This is how 5G analog beamforming works. While the data is exchanged with one UE, all other UEs are in the “pause” mode.

Now, let us assume the low register singers (bass, baritone) turn left and sing the Canadian national anthem, while the high register singers turn right to sing the American national anthem. When they are done, the low register singers turn to the right and sing the American national anthem, while the other half turn to the left and sing the Canadian national anthem.

Each listener records both the low and high versions on their smartphone, and then uses an app to stitch the low and high registers together. Again, it took twice as long as it would have if only one listener were present.

This is how digital 5G beamforming with a beam frequency reuse factor of 2 works. Each beam uses only half of the frequency bandwidth at a time, either the lower or the upper half of the band.

Next, the number of opera singers is doubled. Now we have two operas in one location, so it is a bit crowded on stage. Everybody has a bullhorn. The original opera members turn to the left and sing the Canadian national anthem, while the cloned opera members turn to the right and sing the American national anthem. Because they sing in the full register at the same time, both anthems can be sung simultaneously. This is how digital beamforming with a beam frequency reuse factor of 1 works. Each beam uses full frequency bandwidth to deliver data to the user.

Safe Distances for 4G and 5G

Now let us go back to the original setup. One listener is 30 meters away, in front of the opera. No bullhorns. Let us suppose the listener starts moving closer, still dead center in front of the opera. The sound gets louder and louder. At some point the sound becomes too loud, and their ears start hurting. The listener backs away until the pain stops. Let us say that is five meters away from the opera. This is the minimum safe distance to listen to the opera without damaging your eardrums.

Mobile wireless networks work the same way. A governing body sets the maximum electric field intensity allowed in front of the antenna, and engineers calculate the safe distance from that limit. Coming closer than the calculated safe distance may cause harm to your body. The actual safe distance depends on many factors, and we will cover that in detail in one of our upcoming webinars.
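
As a rough illustration of the kind of calculation involved, here is a simplified free-space sketch; the EIRP and field limit below are assumed values, and real compliance assessments involve many more factors.

```python
import math

# Simplified far-field estimate: E = sqrt(30 * EIRP) / d, so the minimum
# safe distance is d = sqrt(30 * EIRP) / E_limit. Inputs are illustrative.
def min_safe_distance_m(eirp_w, e_limit_v_per_m):
    return math.sqrt(30.0 * eirp_w) / e_limit_v_per_m

# Example: 200 W EIRP against a 61 V/m general-public limit
# (61 V/m is an ICNIRP-style reference level for frequencies above 2 GHz)
print(f"{min_safe_distance_m(200.0, 61.0):.1f} m")  # about 1.3 m
```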

Let us get back to the scenario at hand. We just mentioned that the safe distance is five meters while standing dead center in front of the opera. This is a 4G case, and we learned from that case that moving away from the dead center decreases the intensity of the sound. If we are five meters away but are all the way to the side, we hear much less.

In that case, even if we come closer than five meters, our ears will not hurt. That is why it is safe to stand directly below a 4G panel antenna, even if the distance between you and the panel is less than the recommended minimum safe distance.

Lastly, let us look at the safe distance for 5G. This time, the opera singers have bullhorns, and the sound is louder. Now, our ears start hurting 10 meters away from the singers. Not only is the safe distance larger, but it also does not change with the position of the listener, because the opera singers follow the listener as they move around. Thus, the safe distance is 10 meters in any direction relative to the center of the 5G antenna panel, including directly above and below the antenna.

This is how it works in the RF world in principle, although in reality, the safe distance does vary a little with user equipment (UE) position relative to the panel.

Vladan Jevremovic, PhD, is the Director of Engineering at iBwave Solutions. He joined iBwave in 2009 as Director of Engineering Solutions and has been in the telecommunications industry for more than 17 years. He is responsible for developing custom solutions as part of the professional services portfolio. He is also responsible for ideation and requirements specification in the new product development life cycle and works closely with the development team on new product implementation. Vladan received his Diploma Ing. from the University of Belgrade in Serbia, and his Master's and PhD from the University of Colorado at Boulder.
 -
5G Environment

It’s About Time for 5G – And About Giving Radios Precise Local Clock Sources Even in Harsh Environments

Redefining best practices for implementing 5G timing and synchronization solutions.

There are a number of technological hurdles the industry faces while preparing for 5G. One of the most challenging is providing a network timing source that is accurate, stable, and reliable enough to support greater amounts of processing, at faster rates, over a larger number of tighter channels than was possible with 4G.

With densification bringing 10 to 20 times more radios than 4G, the coming generation of 5G networks will have a much smaller latency budget between radios. Plus, the higher timing precision of 5G networks must be achieved even as this much larger number of radios, in less expensive housings with less thermal and mechanical protection, is pushed to locations with significantly weaker environmental controls. These include telephone poles and lampposts beside busy highways, where radios will be subjected to heat, vibration, and rapid temperature shifts.

These and other 5G deployment challenges are being solved with the latest MEMS timing architectures, which provide an alternative to the quartz crystal-based oven-controlled oscillator (OCXO) technology previously used to deliver an accurate timing source.

MEMS OCXOs overcome the limitations of quartz OCXOs while delivering new capabilities that will help usher in a new set of best practices for deploying 5G infrastructure in harsh environments.

Tighter Timing in Harsher Environments

As mobile operators move into 5G and edge computing, they require much tighter time synchronization in the radio equipment, which has necessitated the use of OCXOs. Before 5G, OCXOs were deployed in well-controlled environments. Now, the computing, core network, and radio will be collapsed into a 5G system that may be deployed in an uncontrolled environment such as a tower, rooftop, or lamppost.

The OCXOs will be exposed to vibration and temperature extremes in this environment, without the benefit of the thermal and mechanical protection that was provided with earlier 4G radio housings. This requires an evaluation of the benefits of MEMS and quartz timing technologies for implementing the critical functionality of a locally derived timing clock.

The importance of this local timing source cannot be overstated. It is one of three sources of timing in a 5G system, alongside the network itself and the backup GNSS source that provides a pulse per second when the network goes down (see Figure 1). When those primary sources are lost, the local timing source must act as a holdover clock and keep going until the primary source(s) of timing return.

It behaves like a flywheel that keeps spinning at a constant speed even when it is not being actively driven. There can be no drift or temperature-induced frequency changes, and no “activity dips” or sudden frequency jumps. The holdover clock source must be extremely stable so that the network synchronizer that selects between the three sources can perform “hidden” switching with no disruption in the signal phase of the outgoing clock.

Figure 1. One of three timing sources is selected by the network synchronizer, with no phase jumps during switching.

The problem with quartz-based OCXOs in this critical 5G holdover role is that they are extremely sensitive to environmental stressors including shock, vibration, heat, and rapid temperature shifts. Each of these stressors can disrupt the ability of a quartz-based OCXO to deliver a stable timing source. The lack of a stable timing source degrades network performance, reduces uptime, and impacts mission-critical services such as advanced driver assistance systems (ADAS).

Shock and vibration can be particularly problematic. Vibration can cause quartz oscillators to easily go out of specification, potentially for as long as the vibration continues. This span of time can be minutes for a passing freight train or even longer if, for instance, the oscillator is subjected to steady gusts on a windy day.

Temperature also presents challenges. Depending on the season and where the oscillator is deployed, it can be exposed to extremely hot or cold conditions for prolonged periods. Figure 2 shows how temperature can affect frequency stability in quartz-based oscillators.

Also challenging are rapid temperature shifts, such as when a black box in the sun cools quickly as a rain cloud passes by, or in areas where colliding weather fronts and a moving jet stream bring together hot and cold air masses that can whipsaw ambient temperature from one extreme to another in a matter of minutes. Quartz oscillators have difficulty dealing with these effects, which can lead to frequency changes of hundreds of parts per billion (ppb). In many cases, it may take several minutes for the quartz oscillators to return to the specified frequency due to the slow oven-control time constants.

None of this is satisfactory in the 5G environment, where the latency budget of the network behind the radios is now 5 to 10 ns, and the maximum time difference between radios is limited to 130 ns. To solve these problems, MEMS timing solutions use a combination of programmable analog, innovative packaging, and high-performance temperature-compensation algorithms that deliver 20 times better timing precision than is possible with quartz-based alternatives.
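
A back-of-envelope sketch shows why sub-ppb stability matters for holdover; the numbers below are illustrative, derived from the budgets quoted above.

```python
# A constant fractional frequency offset y accumulates time error y * t:
# 1 ppb sustained for 1 second adds 1 ns of error.
def time_error_ns(freq_offset_ppb, holdover_s):
    return freq_offset_ppb * holdover_s

# A quartz OCXO pushed 100 ppb off frequency by vibration consumes a
# 130 ns radio-to-radio budget in just over a second:
print(time_error_ns(100, 1.3))  # 130.0 ns
# A sub-ppb oscillator can hold the same budget for minutes:
print(time_error_ns(1, 130))    # 130.0 ns, reached only after ~2 minutes
```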

The ability of these MEMS OCXOs to maintain sub-ppb frequency stability under challenging environmental stressors will have a transformative impact on 5G system deployment. The technology also gives developers an opportunity to substantially re-think their design strategies so they can take full advantage of the new capabilities that MEMS OCXOs deliver.

Figure 2. Comparison of measured frequency stability over temperature (hysteresis) of SiTime MEMS Stratum-3E OCXO and three quartz Stratum-3E OCXOs.

New 5G Best Practices with MEMS Oscillators

MEMS oscillators create a new set of best practices for deploying accurate network timing sources. Most importantly, they eliminate the need for developers to restrict their OCXO printed circuit board (PCB) placement options.

The sensitivity of quartz OCXOs to environmental stressors has required that they be separated from any sources of heat and airflow-induced thermal shock. These board placement constraints have complicated routing and created potential signal integrity problems. While developers have tried to solve this problem by using specialized plastic OCXO covers for thermal and airflow isolation, this introduces additional manufacturing steps and production complexity.

These concerns do not exist with MEMS OCXOs, which have 20 times the vibration immunity of quartz. They also have much better dynamic stability, with a typical frequency slope vs. temperature (ΔF/ΔT) of ±50 ppt/°C (ppt = parts per trillion) and an Allan deviation (ADEV) of 2e-11 under airflow (see Figures 3A and 3B).

MEMS OCXOs eliminate the need to worry about protective components or mechanical shielding during board design, and on-chip regulators mean there is no need for external LDOs or ferrite beads. Additionally, MEMS oscillators are resistant to microphonic and/or board bending effects, which is a key consideration for large telecom PCBs. Without these placement constraints, designers will have significantly greater freedom to place the components based on other criteria such as less cross-coupling, reduced EMI, and higher density to save space.

Figure 3A. Comparison of measured frequency slope vs. temperature (ΔF/ΔT) of MEMS Stratum-3E OCXO and three quartz Stratum-3E OCXOs.

Figure 3B. MEMS Stratum-3E OCXO showing an Allan deviation (ADEV) of 2e-11 under airflow.

Concerns about heat and rapid temperature shifts are also removed with MEMS oscillators. Developers using the higher-performance MEMS OCXOs can expect their local timing source to operate cleanly up to 125 °C with very tight stability.

MEMS OCXOs will also maintain frequency within specifications even if the ambient temperature changes by as much as 20 °C within minutes. The timing source will not suffer any environmentally induced fast frequency changes that can lead to dropped connections. This gives operators confidence that they can deploy 5G radios wherever they are needed.
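
Connecting this to the slope figure quoted earlier, a worst-case reading (assuming the typical slope holds across the whole temperature swing) gives:

\[ \frac{\Delta f}{f} = \pm 50\ \text{ppt/}^{\circ}\text{C} \times 20\ ^{\circ}\text{C} = \pm 1000\ \text{ppt} = \pm 1\ \text{ppb} \]

that is, roughly 1 ppb across the entire 20 °C excursion.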

The programmability of MEMS timing also redefines 5G design best practices. MEMS OCXOs expand the choices that developers have with regards to frequencies, output types, operating temperature, in-system control, and other features.

For example, developers can now choose the optimal frequency for the application, anywhere from 1 to 220 MHz. They also can specify output types such as LVCMOS and clipped sine wave to optimize board performance. Other options include extended temperature operation from -40 to +95 °C and -40 to +105 °C, an I2C serial interface for in-system programmability, and a digitally controlled oscillator mode instead of a traditional analog voltage-controlled oscillator (VCO).

These choices are not possible with quartz OCXOs, which are custom built from the ground up, have severe limitations on the capabilities that can be specified, and are difficult to procure and use. In contrast, MEMS OCXOs come in a variety of standard footprints and are available as drop-in replacements for legacy OCXOs while improving overall system performance and robustness. Another advantage is faster startup to the desired frequency – MEMS OCXOs get there in milliseconds, while analog quartz-based OCXOs can take minutes.

Developers of 5G network equipment face difficult challenges. They must establish and ensure a stable timing source for 10 times the volume of installed radio equipment compared with 4G networks. The connection to the core network will be via lower-grade switched networks, further increasing the requirement for reliable clocks in the radios. Plus, the stability of the timing source must be guaranteed in significantly harsher environments than those where 4G radio equipment has been deployed, without the benefit of the earlier radios' more protective housings.

MEMS oscillators offer an alternative to legacy quartz-based OCXOs, which simply cannot meet these challenges. MEMS solutions deliver the stability, performance and immunity to shock, vibration, heat and rapid temperature shifts that are necessary for ensuring that 5G radios can be installed wherever necessary, regardless of environmental conditions. At the same time, these MEMS OCXOs redefine best practices for creating 5G systems and give developers significantly more design options than they had with legacy quartz-based OCXOs.

Markus Lutz is the CTO and founder of SiTime Corporation. He is a prolific entrepreneur and inventor with proven ideation-to-implementation experience, having brought multiple first-of-their-kind MEMS technologies successfully to market. He has received various awards and recognitions and has achieved technological leadership in transforming industries through efficient productization. He has comprehensive knowledge of micro- and nanotechnologies and is a MEMS expert in design, process, system architectures, and mass production. Lutz is a multi-disciplinary problem solver who holds over 100 patents. For more information, go to www.SiTime.com.
 -
5G

Market Disparities: Building A Strategy for Digital Edge Empowerment

Whether it is business, interpersonal communication, education, healthcare, precision agriculture, or otherwise, technology and connectivity continue to define and redefine our world.

Robust, reliable, and efficient access to the Internet — and to the content and resources it plays host to — is key to maintaining a competitive edge, supporting local economic growth, and ensuring overall quality of life.

Today, however, there are notable inconsistencies in the distribution of digital capabilities, affecting businesses and individuals everywhere. As technology continues to evolve rapidly, the digital divide grows larger, hampering the ability of industries and individuals across the United States to learn, grow, and prosper.

As the pandemic continues its onslaught, shifting the way individuals and businesses interact and driving the world toward a more digital reality, the need for more robust and reliable communications infrastructure has been heightened. The demands on networks have increased, and the requirements for distance learning, remote workforce enablement, telehealth, and beyond have grown exponentially, meaning that those without high-speed Internet access are put at a severe disadvantage.

As the gap in communications infrastructure broadens between metropolitan and rural or underserved markets, it is clear that the time to bridge this rift is now. The only question that remains is how to build a strategy that can overcome this challenge and keep these locations on track for stable, continued growth — in a way that makes sense for local business.

Understanding the Digital Divide

Since its debut, the Internet has continued to evolve, becoming an increasingly central facet of life. The Statista Research Department’s 2020 IoT Connected Devices Report forecasts that by 2030, the global number of connected devices will amount to 50 billion. Those devices will be used — and are being used today — to access online banking, distance learning, and remote work, to host virtual appointments with doctors, to pay bills, contact emergency services, manage agricultural crops, and more. It is hard to ignore the fundamental importance of connectivity and the key role that access to digital capabilities plays in overall success. Continued digital transformation is accelerating this dependence on technology, making equal, efficient, and robust access even more important.

The pandemic heightened reliance on digital infrastructure due to the implementation of social distancing. It drove educational institutions to implement remote learning solutions — some for the first time in their histories — while major corporations extended Work From Home (WFH) policies into 2021. These online solutions require confidence that individuals can access files and perform work tasks over public and private connections. Meanwhile, healthcare workers, still faced with frontline pandemic responses, are adjusting their practices to support telehealth solutions, diagnosing and treating patients from virtually anywhere. Traditional businesses from restaurants to retail have all pivoted, driving more sales online with no-touch service capabilities, ensuring the safety and welfare of everyone as we keep our economy running.

Unfortunately, while demand for online capabilities has become more universal, the spread of the underlying technology and infrastructure that supports this access has grown more skewed toward central hubs. While the infrastructural support for metros has been a natural development resulting from the increasing demand of a more consolidated population, it comes at a cost. That cost is rural, underserved, and lower-income communities being left increasingly behind, despite the fact that connectivity is just as core to their lives as it is for those in more central business destinations.

One indication of this systemic issue is that, as of early 2019, Pew Research Center reported that 26 percent of adults living in households earning less than $30,000 a year are “smartphone-dependent” Internet users. This means that they own a smartphone but do not have broadband Internet at home, and as a result, they employ their smartphone for traditional online tasks. In an era of social distancing and quarantine, when 53 percent of Americans are reporting the Internet as essential, being unable to access these online tasks reliably or efficiently represents a critical issue.

With the need for ubiquitous digital infrastructure, the level of latency and performance now required by adjacent, rural, and underserved markets for streaming, mobile demands, and content consumption — a level on par with major markets — is still going largely unconsidered. As a result, many markets still have their local content and applications backhauled to major market hubs. This negatively impacts performance, increases costs, and drives end users' frustrations higher.

Why is the Gap Growing?

The digital divide continues to grow due to a number of factors. To start, large cities and metropolitan areas have high population densities — they are where the most customers are, where businesses reside and do much of their work, and where most infrastructure providers assume the return on investment is highest. Outside these hubs, infrastructure providers often see diminished incentives to deploy and fear they will not be able to justify the costs of building the necessary foundations. When the initial Internet infrastructure was developed, it focused on these core regions to get the most people connected. As the U.S. population continues to disperse, these major markets remain the most populated, but the adjacent and rural markets now have populations that rival those of the first Internet-connected locations.

Still, challenges do not just arise from factors external to the rural market, they also come from the markets themselves. In more remote and underserved areas, it is not uncommon for existing businesses to be resistant to new market entrants. Innovation often looks like disruption, which can create fear that businesses in the area will not be able to pivot or will be outpaced by new developments. While understanding the value of enhanced digital capability is not the issue, understanding how that innovation occurs and creating a method that works alongside existing market entities to ease any reservations is key.

In order to continue driving digital transformation, traditional transport solutions that rely on major markets must evolve to support a more robust and decentralized IT architecture, meeting the evolving content and application use of a highly distributed user base.

Creating a New Model

With today’s technology clearly requiring a more distributed model to the edge, attention on bridging the digital divide is growing. Solutions are being developed, but this challenge needs more work (and financial resources) to make up for lost time. Furthermore, the approach to addressing these needs in rural and underserved markets cannot be the same approach that has been taken in metropolitan locations — this is a different use case altogether that requires an individualized approach, building the right infrastructure with the right strategy to cultivate long-term growth and success.

To solve content and application latency, efficiency, cost, performance and access challenges, local content and applications need to be kept local. This means that a neutral approach to aggregating networks and driving interconnection at a single strategic location is needed. In metro-adjacent, rural or currently underserved locations especially, access to large data streams must be provisioned in a way that empowers markets through a more widespread distribution model designed to build trust while maintaining critical density for cost and performance efficiency. This model of interconnecting networks to enhance quality and performance is not new — it is just not yet happening at scale in a way that is made for the rural and remote areas where it is needed most.

These new market interconnection points require high levels of flexibility to overcome any deployment challenges — they must be able to be built in a host of different types of locations that suit what is available or what is needed in each market, remaining neutral in every way. They must be designed specifically for local compatibility, remaining free to leverage any real estate type or equipment while enabling any carrier, cloud, or content provider to be empowered by reaching the most endpoints through a robust interconnection strategy. At the core of this model is cooperation. Cooperation with, and between, local entities when building out this infrastructure means the existing businesses and providers are supported, not disrupted, which is key for ensuring full adoption and enduring success in these areas.

Not only will these points keep content and application traffic local (and offer the associated speed, cost, reliability, and performance benefits), they will create a symbiotic ecosystem for local businesses that goes beyond aggregating existing providers to attract a growing amount of content and applications as the edge point matures. If cultivated correctly, these interconnection points will continue to attract more providers and create a host of benefits, not only for themselves but for the wider digital ecosystem, creating self-sufficient, ongoing growth that will level the digital playing field while building a more robust foundation for the needs of today and tomorrow.

Scott Willis serves as DartPoints President and Chief Executive Officer and also serves as a member of the Board of Directors. He is a recognized global technology leader in the communications industry with a demonstrated track record of building successful businesses, for both large and small organizations, to significant scale.

He has extensive leadership experience transforming organizations, setting strategic direction, overseeing complex operations, and crafting corporate alliances while delivering growth and profitability to the business.
 -
5G

Wireless Traffic Forecasts: 5G Will Make Little Difference to Long-Term Trends

Analysys Mason's Wireless network data traffic: worldwide trends and forecasts 2020–2025 is the first traffic forecast that we have published since the launch of 5G. We still expect that 5G will not change the general direction of traffic growth. Some of the other high-level findings from the report are as follows:

  • Mobile data traffic worldwide will grow by a factor of 5.5 between 2019 and 2025, in a broadly linear manner.
  • 5G traffic will not dominate as quickly as 4G traffic did, but it will overtake 4G traffic in 2025 (if fixed-wireless access (FWA) is included). 5G handset traffic will be similar to 4G handset traffic in 2025.
  • A growing proportion of handset traffic will use cellular networks. Cellular networks accounted for 39 percent of all handset traffic worldwide in 2019 (with huge variations between countries), and this share will be close to 50 percent by 2025. This trend will be reversed in countries with rapidly expanding fixed broadband penetration.
  • The average cellular network usage by handsets worldwide will grow from 5.4 GB per month in December 2019 to 19.7 GB in December 2025. The average data usage by handsets on all networks worldwide will grow from 13.5 GB per month to 40.5 GB per month over the same period.
  • FWA will account for 13 percent of all cellular traffic by 2025; handsets and data-only devices will account for 80 percent and 7 percent, respectively.
  • The Wi-Fi share of the total IP access network traffic will increase from 53 percent in 2019 to 66 percent in 2025. The cellular share will rise from 12 percent to 18 percent over the same period.

Analysys Mason has long predicted that 5G will not bring about particularly profound changes to the general long-term trend in mobile traffic (where ‘mobile’ here excludes FWA); that is, we expect that the cellular traffic growth rate will decline and eventually converge with the overall IP traffic growth rate. The report summarised here explains this in more detail.
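
For a sense of scale, the compound annual rates implied by the report's headline figures can be sketched as follows. This is illustrative arithmetic only, not from the report itself; since the report forecasts broadly linear growth, the compound-equivalent rate is just an average over the period.

```python
# Average annual growth rate implied by a total growth factor over n years.
def implied_cagr(growth_factor, years):
    return growth_factor ** (1 / years) - 1

print(f"{implied_cagr(5.5, 6):.1%}")         # ~32.9% per year: total mobile traffic, 2019-2025
print(f"{implied_cagr(19.7 / 5.4, 6):.1%}")  # ~24.1% per year: cellular GB per handset
```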

5G Is Likely to Lead to Only Short-Term Surges in Traffic

Coverage is still very limited in some markets in which 5G has been launched, but 5G does introduce a huge block of additional capacity to cellular networks. In the past, we have observed that mobile traffic volume is primarily a function of supply and pricing, and not of extrinsic demand: volumes rise quickly when supply is plentiful, and slowly when it is constrained. However, we have also seen that new capacity or generations of networks and new pricing (which often go hand-in-hand) have, over time, created increasingly weak surges in traffic. This effect was seen in South Korea (see Figure 1); the year-on-year traffic growth rate picked up after the 5G launch in April 2019 but has since fallen back to the levels seen prior to the roll-out.[1]

Figure 1. Mobile data usage by generation and total year-on-year mobile traffic growth, South Korea, January 2019–May 2020. Source: Analysys Mason from MSIT, 2020.

Cellular Data Usage Will Proportionately Start to Displace Wi-Fi Usage on Handsets, But Not on Any Other Devices

Increasingly common and inexpensive unlimited data contracts are dampening Wi-Fi usage on handsets in both public Wi-Fi spaces and, much more importantly, private Wi-Fi networks (home or office). The Wi-Fi share of handset data varies greatly between countries depending on mobile pricing and home broadband take-up, but it was 61 percent worldwide in 2019. We forecast that this will fall to 50 percent by 2025. This is a fairly slow decline; although unlimited data contracts stop disincentivizing the use of cellular networks, they do not actually incentivize it. Wi-Fi will continue to be the dominant radio access technology in terms of overall traffic (there is currently four times more Wi-Fi data traffic than cellular traffic) for two reasons: other, more bandwidth-demanding wireless devices rely solely on Wi-Fi, and fixed gigabit broadband plus Wi-Fi 6 should provide a superior indoor experience to 4G or 5G.

5G Illustrates the Perils of Overproduction

5G launches are showing some characteristics of a crisis of overproduction. Operators are caught between finding (or creating) high-yield use cases (often with more-complex value chains) to justify the investment and falling back on high-volume, low-yield ones.

Simple mobile handset usage is not going to change MNOs’ fortunes, as most acknowledge; hence their interest in novel (often B2B) use cases outside eMBB, particularly the idea of ‘permission-in’ network slices sold at, we must assume, highly differentiated rates that generate higher yields per gigabyte than end-user-pays best-efforts internet. It is too early to predict the impact of these new use cases on traffic, but of course the volume of traffic is not the critical factor in these cases. The development of these new use cases may be understood as a kind of price discrimination to make the revenue trend (which is normally flat) match the demand trend more closely; that is, to give moderate and broadly linear growth (see Figure 2).

Figure 2. Capacity, demand, and revenue in a crisis of overproduction. Source: Analysys Mason, 2020.

If cellular traffic volumes show that consumers are fundamentally underwhelmed by 5G and find little to do on 5G that they could not already do on 4G, then we expect that some MNOs will use FWA to monetise their investments in spectrum and the newly expanded, yet empty airwaves. A specific set of conditions is required for FWA to thrive; namely, poor coverage and weak competition from gigabit-capable fixed broadband. Where these conditions are not in place, there is no fixed-to-FWA substitution. The opportunity for FWA may be slipping away in countries in which there has been significant investment in fibre, and is almost non-existent in super-advanced telecoms economies such as China and South Korea. Nevertheless, we expect that the adoption of the ultimate ‘pile-it-high-and-sell-it-cheap’ cellular service, FWA, will pick up in a few markets, most notably in the U.S. but also in Australia, Germany, and the UK. Indeed, we forecast that it will represent 13 percent of cellular network traffic worldwide by 2025.

Predicting that 5G traffic will catch up with 4G traffic by 2025 may appear bold. In fact, we do not expect that it will dominate quite as quickly as 4G did, but neither do we envisage that operators, having invested large sums in 5G, will allow the networks to lie fallow. The real problem for MNOs is whether these networks get filled with the right kind of traffic.

[1] COVID-19 muddies the waters because in this case it is impossible to tell whether the pandemic caused the stronger growth in early 2020 or the slower growth in mid-2020. Most MNOs saw an increase in cellular traffic during lockdown, but a significant minority saw a decrease.

Rupert Wood is the Research Director of Fibre Networks. He is the lead analyst for our Fibre Infrastructure and Wireless Infrastructure research programmes. His research covers the following areas: the evolution of operators' investment priorities; operator business structures; business models for FTTP and convergence; fixed broadband technologies; the economic impact of digital transformation; capex forecasting; and network traffic forecasting. He has extensive experience of advising senior management on strategic issues. Rupert has a PhD from the University of Cambridge, where he was a Lecturer before joining Analysys Mason.
 -
Case Study

Dublin, Ohio, Embarks on Smart City Journey

IoT Pilot Leads the Way to a Connected Future

A smart city is not built in a day. The road to a smarter future involves careful planning, a strategic vision, and successful proof of concept deployments to validate technologies as well as partners. And a little luck of the Irish does not hurt, either.

Early this year, the city of Dublin, Ohio, embarked on a journey to create a future smart city designed to better serve its more than 47,000 residents and area businesses. The innovative "Connected Dublin" initiative aims to leverage smart mobility technology, IoT infrastructure, and high-speed fiber connectivity to enable economic development and improve quality of life for this growing municipality located just northwest of Columbus, Ohio.

Back to the Future

In the first phase of this smart city initiative, Dublin’s civic leaders began their journey to the future by looking to the city’s past; namely, the Historic Dublin district. The decision was made to trial a smart parking application in the popular downtown historic district, blending next-generation network architecture with the city’s 19th-century architecture. Built on an IoT framework, the pilot is intended to measure and analyze parking patterns, making those insights visible to community leaders and residents in order to help improve local business success while reducing carbon emissions.

The city administration and Fujitsu Network Communications partnered to realize their vision of an application that could automatically observe the parking lot in order to determine how many vehicles are present and how many parking spots are available. From this data, the city hoped to glean the turnover rate for parking spots and peak parking times and analyze how parking patterns correspond to business foot traffic – all in an effort to help maximize utilization.

Supporting the application required installing cameras, data analytics algorithms, and a local high-speed wireless broadband network. The team designed and deployed a single network solution that would not only enable the smart parking application, but also support multiple use cases in the future.

A Smarter Network

The network solution relies on an integrated digitization platform that meets the requirements for new 5G use cases with high-capacity wireless access over shared Citizens Broadband Radio Service (CBRS) spectrum and Ethernet backhaul. Software-defined control and orchestration allows seamless end-to-end service across wireless, wireline and virtual network resources with automated control.

To ensure availability of the latest data, the platform makes use of autonomous networking with an artificial intelligence (AI) driven video analytics application for real-time video stream processing. A virtual RAN core with user plane traffic separation at the edge provides local application access for distributed processing support. Edge compute nodes fueled by powerful GPUs are a critical component of the network architecture. By performing most of the computations at the edge of the network, these nodes limit the amount of data that needs to travel from the edge to the core of the network. This not only reduces latency in data transmission, but also facilitates the addition of new features via over-the-air software updates and enables high-bandwidth applications at lower operational costs.
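
The data-reduction principle at work here can be sketched in a few lines; the detector output, lot size, and publish() hook below are hypothetical stand-ins, not the Fujitsu implementation.

```python
import json, time

TOTAL_SPOTS = 40  # hypothetical lot size

def occupancy_from_frame(detections):
    """detections: labeled objects from a local video-analytics model."""
    cars = sum(1 for d in detections if d["label"] == "car")
    return {"timestamp": time.time(), "occupied": cars, "free": TOTAL_SPOTS - cars}

def publish(record):
    # Stand-in for sending a few hundred bytes upstream instead of
    # streaming megabits of raw video to the core.
    print(json.dumps(record))

publish(occupancy_from_frame([{"label": "car"}] * 23))
```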

Building for Tomorrow

The smart parking deployment is the first phase of a software-defined, 5G-ready platform for digital transformation. The platform is built on four main components: multi-layer software orchestration and control, physical and edge compute network equipment, IoT applications and devices, and complete lifecycle support services. These smart building blocks can be combined to meet the needs of various private or public 5G networks, and the IoT framework simplifies on-boarding of future use case applications, such as smart utilities, enhanced security surveillance or e-health services.

Unlike previous network generations, where applications would use the available services provided by the network, today’s intelligent network collaborates with applications to create network slices that best fit the requirements for each software application. Now a number of applications that were not economically feasible by themselves in the past, such as waste management or smart lighting, can each leverage a portion of the single IoT network. In this way, network slicing delivers real insights and efficiencies to benefit local businesses, municipal services, or utilities.

“The transformational power of 5G networking, combined with the benefits of IoT technologies, enables economic opportunity and improved quality of life for Dublin and cities of all sizes,” said Greg Manganello, head of the wireless and services business unit at Fujitsu Network Communications, Inc. “The smart parking IoT application with data analytics and machine learning provides the city of Dublin and Fujitsu with key learnings, new opportunities and future possibilities for smart city applications built on top of powerful private 5G networks.”

Moment in Time

“The smart parking application provides valuable real-time insights to support local businesses, bolster economic development and reduce drive times,” said Doug McCollough, chief information officer at the City of Dublin, Ohio. “By collaborating with partners to leverage advanced technology through the Connected Dublin initiative, we are exploring new ways to better serve our most important partners — the citizens of Dublin, Ohio.”

After nine months in service, the pilot application is already providing city staff and local businesses with valuable insights into traffic and parking patterns in the thriving Historic Dublin district, according to McCollough. Collecting data at regular intervals reveals larger trends that can help the city administration make informed, data-driven decisions to improve operations. Moreover, access to the data through open APIs will allow higher-value applications to be built on the data from lower-level software. This data can then be correlated with city service and business demands to understand optimal usage patterns, allowing the optimization of infrastructure prioritization and spending.

In fact, real-time access to the parking data using a custom graphical user interface (GUI) has yielded unexpected benefits. Statistics collected on the number of cars and the duration of visits have helped city staff monitor compliance with "Stay at Home" orders and measure the results of COVID-19 quarantine guidelines.

Intelligent Innovation

Now that the city of Dublin has access to a high-speed, wireless network, many more smart applications can be enabled with minimal expense. And, as 5G becomes a reality, more cities like Dublin will be able to combine intelligent networking, interconnectivity and IoT technology to enable new, modern services for improved overall quality of life and economic growth. As these smart cities of tomorrow increase productivity, efficiency, and cost savings, they will become more resilient and better prepared to respond quickly to unexpected challenges, even in today’s uncertain times.

Kai Mao is a Distinguished Strategic Solutions Planner at Fujitsu Network Communications Inc., where he is responsible for developing Fujitsu's long-term 5G portfolio strategy. Prior to his current role, Kai held leadership positions in product marketing, product management, and software engineering.

Before joining Fujitsu, Kai was a Member of Scientific Staff at Bell Northern Research. He holds a Bachelor of Science in Molecular Genetics and a Bachelor of Engineering in Electrical Engineering from Western University in London, Ontario.
 -
Case Study mmWave

Winning in 5G with Rapid Characterization of Evolving Antenna Designs

Keysight Case Study Success Stories: 5G mmWave Phased-Array Antenna Testing

In multiple product categories, the race is on to be first to market with 5G devices. Ultimately, the 5G future will include new experiences enabled by ultra-high data rates and reliability, and ultra-low latency and energy requirements. These goals depend on advanced phased-array antennas capable of implementing innovative technologies such as massive multiple-input/multiple-output (MIMO) and beamforming. To compound the challenge, the antenna arrays will handle digitally modulated signals operating at millimeter-wave (mmWave) frequencies. Collectively, these changes have major implications for the process of designing and testing antenna arrays.

Developers at a U.S. manufacturer of components for aerospace and defense applications faced this situation. The company aimed to be first to market with a 5G antenna providing 1 GHz of bandwidth in specific mmWave bands of the 5G frequency allocations. Achieving this goal required a major change in the manufacturer’s antenna testing process, including a shift to over-the-air (OTA) characterization using signal generation and signal analysis at mmWave frequencies. Keysight provided the tools the design engineers needed to characterize the digitally modulated signals that mmWave phased arrays generate over the air.

The Challenge: Characterizing Array Performance

The engineering team of a U.S. manufacturer of components for the aerospace and defense industry leveraged its expertise to begin developing a 5G antenna with 1 GHz bandwidth in the mmWave bands. One crucial goal was to enable fine-tuning of antenna performance through rapid design changes to address the specific needs of customers developing 5G devices. Device makers need new antenna designs that provide reliable, high-speed connections. The team needed a way to fully characterize the transmit and receive paths to understand and prove the performance of each design variation. OTA testing techniques were necessary because the antennas operate at mmWave frequencies.

Prior to the 5G program, the development team validated antennas in a test chamber using a vector network analyzer (VNA). However, the signal generator in the VNA was not capable of producing 5G New Radio (NR) waveforms carrying digital modulation. Applying realistic 5G NR signals is essential to fully characterizing antenna and array performance.

The Solution: Adapting and Applying a 5G Testbed

The local Keysight team introduced the company's developers to the Keysight 5G waveform generation and analysis testbed (see Figure 1). This is a reference solution that can meet a wide range of test requirements, including 5G NR (3GPP), pre-5G (5G Technical Forum), and custom orthogonal frequency-division multiple access (OFDMA) waveforms.

Figure 1. This configuration of the 5G testbed supports 3GPP NR signal creation up to 44 GHz (left) and includes benchtop spectrum analysis up to 50 GHz (right), both with integrated 1 GHz bandwidth.

Real-time beamforming and beam tracking were essential capabilities for reproducing real-world environments in this use case. RF channel in-phase/quadrature (I/Q) constellation, error vector magnitude (EVM), antenna pattern, and beam width were crucial measurements. EVM is an industry-standard measure of signal quality used to assess the performance of an RF signal. More stringent specifications for RF performance increase the importance of EVM measurements, particularly in R&D and design validation.
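
The standard EVM definition can be computed in a few lines. This generic sketch (not Keysight's implementation) takes measured and ideal reference symbols and returns the RMS error as a percentage:

```python
import numpy as np

def evm_percent(measured, reference):
    # RMS error vector power, normalized to reference power.
    err = measured - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))

# Example: QPSK reference symbols plus a little additive noise
ref = (np.array([1, -1, 1, -1]) + 1j * np.array([1, 1, -1, -1])) / np.sqrt(2)
rx = ref + 0.01 * (np.random.randn(4) + 1j * np.random.randn(4))
print(f"EVM = {evm_percent(rx, ref):.2f}%")  # roughly 1-2% at this noise level
```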

The solution includes Keysight hardware and software elements for signal generation and signal analysis. The system uses a Keysight M9383A PXIe microwave signal generator and Keysight Signal Studio signal-creation software to produce 5G NR signals. The M9383A provides 1 GHz bandwidth across a frequency range of 1 MHz to 44 GHz. Developers download 5G NR signals created in Signal Studio to the M9383A. During testing, the M9383A connects directly to the antenna array under test, and the resulting signal is beamed at the signal analyzer.

For signal analysis, the solution includes a Keysight N9040B UXA signal analyzer and Keysight 89600 VSA software. An antenna connected to the UXA provides the input signal. The 89600 VSA software enables demodulation and detailed analysis of 5G NR signals, including EVM (see Figure 2), which is the key figure of merit for measurement quality. Different views make debugging easier, accelerating development so the company can be first to market.

Figure 2. In this side-by-side multi-measurement display, the 89600 VSA software shows demodulation of 5G NR and LTE carriers.

The Results: Rapidly Characterizing Revised Designs

Using the Keysight 5G testbed, the component manufacturer's engineers were able to perform accurate and repeatable OTA testing using 5G NR signals. One exceptional result: the testbed can measure 1 percent EVM in the OTA configuration. This enables the company to show its customers the true performance of each 5G phased-array design. Additionally, the testbed enables the creation of real-time 5G signals and instantaneous measurements of the transmit/receive channel. With these capabilities, the component manufacturer achieves complete test coverage, meets 5G device makers' need for more performance, and can make rapid design changes in response to evolving requirements.

Going Forward

The future success of 5G depends on speed, whether it is in the creation of devices, the deployment of networks, or the performance of those devices and networks. In the run-up to 5G, the faster component manufacturers respond to multiple unique requirements, the faster their customers launch devices. Keysight’s solutions equipped this component manufacturer to achieve this goal. For example, the company introduced a development tool to help its customers accelerate their own projects.

 - Image Source: www.seeclearfield.com
5G

Simplifying Fiber Deployments for 5G

One of the biggest topics of discussion regarding 5G is the need for fiber. Due to latency and bandwidth demands, fiber must be pushed further out in the network. Depending on the equipment and services, the number of fibers per 5G site varies, but some fiber must be at each site. Getting large amounts of fiber out to these sites is a daunting task that can involve permitting, road construction, coordination, and labor. Once the 5G fiber is deployed, fiber cables must then attach to radios to provide the 5G services. Simplifying this process reduces both installation and restoration time in your 5G deployments.

Radically Simplify the Overall Project

Bringing fiber anywhere, with the ability to satisfy the unique deployment needs of the service provider, engineer, and network designer, is getting easier. Modern simplicity in design delivers cost-effective, technician-friendly solutions. The best solutions combine the flexibility to deploy a product platform across a full range of fiber applications – including FTTX, cell backhaul, distributed antenna, node collapse, and other premise and commercial environments. The idea is to get your wireless project going quickly.

Pole-mount small cell cabinet. Design simplicity gets your wireless project going quickly.

One of the best examples of a product platform that spans multiple fiber operating environments and facilitates quick deployment is the fiber cassette. The fiber cassette provides flexibility and reliable performance within the inside plant, outside plant, and access networks. All types of fiber cable construction can integrate within a fiber cassette to support a variety of patch-only, patch-and-splice, passive optical component hardware, and plug-and-play scenarios.

Reduce Radio Turn-Up Time

Plug-and-play technology significantly speeds up a 5G fiber build. Instead of conventional splices, plug-and-play technology uses outside-plant-rated connections. This creates an opportunity to save time not only during initial turn-up, but also during troubleshooting or operational moves, adds, and changes. Instead of placing splice closures throughout the access network, each of which requires a trained splicing technician to open and splice, the better alternative is to place a terminal: essentially a sealed patch field that almost anyone can patch into using a terminated fiber drop cable.

An application we expect to see for 5G is the use of terminals, similar to how traditional FTTH gets deployed. In this case, the fiber tail of the terminal is spliced to a distribution cable passing through the serving area during the initial installation. To turn up a radio, a non-splicing technician can install an LC duplex patch cable from the terminal to each radio. Installation may not have to occur during a service window because it is not necessary to open a splice case and disturb fibers that are already in service. The terminal is a separate, safe location to connect fiber into the network without a chance of harming other critical circuits on the distribution fiber.

Limit Restoration Time

Storms, construction errors, and accidental damage will occur – both when deploying 5G fiber and after – and at all hours of the day or night. Limiting restoration time not only cuts costs but also reduces a customer’s frustration when service is down. Using craft-friendly products limits the need for highly trained technicians and gets customers up and running faster.

Recently introduced fiber cable-in-conduit solutions have the same footprint as traditional flat drop fiber cable, with the added advantage of restorability. With a restorable one-pass fiber cable-in-conduit, a technician uses a kit to easily repair the microduct after an accidental fiber cut, then installs a new, pushable assembly, minimizing the cost and time to restore service. This provides a completely protected pathway from the access point directly to the premises, business, or antenna.

Conclusion

The push to roll out 5G may have been slowed somewhat by the COVID crisis, but the fact remains that communities continue to demand ubiquitous high-speed broadband. The early days of deploying 5G are upon us. The technology is well designed, and fiber is improving. The deployment techniques available for building these fiber networks just got even better.

Kevin Morgan leads the marketing efforts for Clearfield as Chief Marketing Officer, having joined the company in 2016. He also serves as an officer of the Fiber Broadband Association’s Board and is a two-time elected Board Chair (2015, 2019), after first joining the Board in 2010.

Prior to joining Clearfield, he spent two decades serving in various senior marketing positions at ADTRAN, Inc. where he gained extensive experience in advanced communications technology, fiber optic systems, and business product marketing. Before that, he spent a decade at telephone operating company BellSouth where he worked as the lead product evaluation resource of broadband technologies in the Science and Technology department.
 -
5G

Quality Mobile Connectivity for Rural America

A Cost-effective Approach to Rapid Expansion of Mobile Broadband in Rural Areas

According to a 2019 report by Pew Research Center titled Digital Gap Between Rural and Nonrural America, over 60 percent of rural Americans surveyed say they connect at home using a broadband internet connection. While this is a significant improvement over the last 10 years, it is clear that many rural Americans still are not connecting where they live, and some do not even own a smartphone. For these unconnected rural Americans, the issue is about more than connecting to high-speed internet at home: broadband is simply not available where they live.

State and local governments with constituents living in largely rural areas have been working to address the connectivity crisis, especially during the pandemic. However, the various initiatives, including providing mobile hotspot devices, have fallen far short of bridging the gap.


These initiatives only work if quality broadband internet infrastructure reaches rural homes and establishments. To address this problem, these government entities have been pressuring Congress to act quickly and invest in the deployment of high-speed fixed broadband infrastructure in these areas.

Connectivity on mobile devices such as smartphones, tablets, and cellular-enabled laptops is insufficient in many of these rural areas because mobile broadband coverage is not available, or spotty at best.

Across the U.S., 97 percent of the land area is considered rural, much of which remains without mobile broadband coverage, including roughly 11 percent of the nation’s road miles. It simply has not been economically viable for mobile operators to deploy miles of terrestrial-backhauled networks into rural unpopulated or sparsely populated areas, much of which is mountainous terrain and dense forests.

Subsidizing the installation cost of telecommunications infrastructure, whether fixed or mobile, is vital to enabling service providers to profitably build out their networks in rural America. This is especially true when relying only on terrestrial backhaul solutions, such as fiber.

According to data from the U.S. Department of Commerce and the National Telecommunications and Information Administration, the fiber and conduit material alone for a 10-mile installation costs $186,000 on average. That figure does not even include trenching or other costs.
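
For perspective, here is a minimal back-of-the-envelope sketch, assuming the material cost scales roughly linearly with distance; the 35-mile run is a hypothetical example, not a figure from the report:

```python
# Back-of-the-envelope scaling of the figure quoted above.
# Assumption: $186,000 covers fiber and conduit material only, for a
# 10-mile run; trenching, labor, and permitting are excluded.

MATERIAL_COST_10_MILES = 186_000  # USD, per the NTIA data cited above

def material_cost(miles: float) -> float:
    """Linear estimate of fiber-plus-conduit material cost for a run."""
    per_mile = MATERIAL_COST_10_MILES / 10  # about $18,600 per mile
    return per_mile * miles

# Example: a hypothetical 35-mile rural backhaul run.
print(f"${material_cost(35):,.0f}")  # prints $651,000
```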

Government initiatives are underway to subsidize both fixed and mobile network buildouts in unconnected rural areas of America. For example, the Rural Digital Opportunity Fund was approved by the U.S. FCC to allocate $20 billion over the next 10 years to broadband providers, which ensures residents in rural areas have access to quality broadband internet connections.

In addition to fixed broadband funding, the FCC also approved what they dub the 5G Fund for Rural America, which provides $9 billion for the deployment of 5G mobile broadband in rural areas over a 10-year period (see the AGL article titled 5G Fund Proposed for Remote Rural America for more information).

However, timing is a major issue for these government initiatives aimed at closing the digital divide. The 5G Fund for Rural America auction is not slated to begin until 2021, and that plan is based on the former Mobility Fund II map. An alternative plan proposed by the FCC would update the coverage maps but push the auction out to 2023. Both plans met resistance from the Competitive Carriers Association (CCA), which represents rural operators, over concerns about timing.

Both the network build-out and the auction need to happen quickly to close the digital divide as soon as possible. Since deploying mobile broadband networks in rural areas using terrestrial backhaul can take six months to a year or more, it could be 2025 before many of these areas have coverage.

There is a viable solution that addresses the cost, timing, and complexity of connecting unconnected areas of rural America: satellite backhaul. By using quality satellite backhaul in place of terrestrial backhaul (or even as an interim solution), mobile operators, and even tower companies interested in new business models, can quickly and cost-effectively deploy 4G or 5G coverage in any place and for any purpose, no matter how rural or remote the area.

While satellite backhaul alone, in the form of capacity, is suitable for larger mobile operators that have dedicated satellite teams in their organizations, most rural operators do not have this luxury. For rural operators and tower companies looking for ways to offer new services to mobile operators, a fully managed cellular backhaul service over satellite is ideal.

There are many advantages to using an end-to-end satellite managed service to backhaul cell sites in rural areas for 4G or 5G coverage. These include:

  • The ubiquitous nature of satellite for rapid deployment of mobile broadband coverage in any rural area, no matter how remote – backhaul in weeks instead of months
  • Advances in satellite technology that provide connectivity to a network of rural cell sites in a cost-efficient manner by dynamically distributing bandwidth based on per-site traffic demand (see the sketch after this list)
  • Technological advances, including forward error correction and acceleration, to ensure strict quality of service (QoS) requirements are met and fiber-like connectivity is delivered for optimal quality of experience (QoE)
  • Low-cost, very small satellite antennas that can be quickly installed, helping providers realize cost and time efficiencies
  • A variety of service plans and professional services that include access to a global space and terrestrial network, required satellite capacity and equipment, expert engineering services for network design and 24x7 support, last-mile connectivity solutions, and installation and maintenance
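
To make the bandwidth-sharing idea in the second bullet concrete, here is a minimal sketch of demand-proportional allocation. The beam capacity, site names, and demand figures are illustrative assumptions, not parameters of any Intelsat service:

```python
# Minimal sketch of demand-proportional bandwidth sharing across rural
# cell sites on a shared satellite beam. All numbers are illustrative.

BEAM_CAPACITY_MBPS = 500.0  # assumed total capacity available to the beam

def allocate(demands: dict) -> dict:
    """Split beam capacity in proportion to per-site traffic demand."""
    total = sum(demands.values())
    if total <= BEAM_CAPACITY_MBPS:
        return dict(demands)  # capacity to spare: each site gets its full demand
    scale = BEAM_CAPACITY_MBPS / total  # uniform scale-down factor
    return {site: mbps * scale for site, mbps in demands.items()}

demand_mbps = {"site_a": 120.0, "site_b": 300.0, "site_c": 240.0}
print(allocate(demand_mbps))
# Total demand (660 Mbps) exceeds the beam, so every site is scaled by
# the same factor (about 0.76) and keeps its share of the whole.
```

The point is simply that a shared pool lets lightly loaded sites subsidize busy ones, which is what makes serving a network of small rural sites from one beam economical.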

We must connect rural America. Time is of the essence. By incorporating a complete satellite backhaul managed service, we can quickly and cost-effectively expand 4G or 5G mobile broadband coverage across rural America.

Todd Cotts is a Senior Principal Product Marketing Manager at Intelsat, with a specific focus on the Mobile Network Operator (MNO) segment. In this role, Todd promotes the value the Intelsat satellite fleet can bring to MNOs and advocates greater integration between satellite and cellular services. In particular, Todd supports space-based solutions for quickly and economically expanding reliable 2G/3G/4G/5G connectivity everywhere.

Todd has nearly 20 years’ experience in the telecommunications industry, much of which was with a tier 1 operator, followed by several industry verticals in devices, Software-as-a-Service, and network testing.

At Intelsat, we turn possibilities into reality. Imagine here, with us, at [Intelsat.com](https://www.intelsat.com/)
 -
5G Artificial Intelligence Internet of Behaviors Security

Trends

As we are all aware, the pandemic has reordered the world. Trends that were once outside the wireless space are now much closer to it, if not fully immersed in it (healthcare, for example). So, as we say goodbye to 2020, it is time to look at a much broader trend landscape.

Artificial Intelligence

The most visible technology trend to surface has been AI. Machine intelligence (MI) and machine learning (ML) are arguably the next set of transformative technologies in the tech sector, in part because of their broad applicability across diverse applications and use cases. And AI will be the great enabler for these platforms.

The pandemic has accelerated both the development and deployment of AI, especially in medicine and security. And as AI has accelerated, so have ML and MI; they are inextricably connected.

These three technologies will rapidly permeate nearly every industry, from agriculture to manufacturing, smart “X”, wireless, medicine, transportation, infrastructure, and more.

There will be many trends under this umbrella, from intelligent robots that do everything from housekeeping to replacing bank tellers, to autonomous vehicles of all types driven by AI, either entirely or in an assistive role. Using AI for data analysis (particularly big data) will become a super-trend.

AI will be especially prominent in 5G because of 5G’s agility and its new technologies. Capabilities such as dynamic spectrum sharing (DSS), intelligent power management, and intelligent MIMO management cannot be handled by existing systems and platforms. Therefore, the trend will be for these three apostles, AI, MI, and ML, to ramp up exponentially.

Cybersecurity Mesh

A new trend in security is moving the security blanket from hard assets to soft assets using a flexible mesh security network. This is trending because the traditional security target, the organization, is no longer the primary asset; its perimeter has become mercurial. The new security objective is the user. This is the new trend in enterprise security.

It is evolving because when the user is secure, the organization is secure. With more digital assets outside the firewall, particularly with the cloud and the shift to remote workers, the security perimeter now sits around the individual rather than around the organization. The theory is that if the user can be secured, there is no portal into the organization’s data.
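
As a conceptual sketch of what a perimeter around the individual can mean in practice, consider the following toy access check, where every request is judged on identity and device posture rather than on the source network. The types and policy rules here are invented for illustration, not any vendor’s API:

```python
# Conceptual sketch of an identity-centric access check, in the spirit
# of the mesh model described above. Fields and rules are illustrative.

from dataclasses import dataclass

@dataclass
class User:
    name: str
    mfa_verified: bool       # identity signal
    device_compliant: bool   # device-posture signal

def allow(user: User, resource: str) -> bool:
    """The perimeter follows the user: each request is judged on the
    user's identity and device posture, never on the source network."""
    return user.mfa_verified and user.device_compliant

remote_worker = User("pat", mfa_verified=True, device_compliant=True)
print(allow(remote_worker, "payroll-db"))  # True, even off the corporate LAN
```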

Anywhere Operations

This trend is a new paradigm along the lines of mesh security but focused on functionality. It is a model that supports customers anywhere and everywhere and allows management of business services across any distributed infrastructure.

Essentially, this offers location independence and delivers services at the point where they are required, using a combination of cloud and edge services. This trend is showing up more and more, largely due to the pandemic’s movement restrictions and work-from-home measures, and it is likely to continue beyond any resolution of the pandemic.

Internet of Behaviors

Almost daily we are coming up with a new Internet of “X” nomenclature. There is the Internet of Your Things, the Internet of Medical Things, the Internet of People, the Internet of Education, and many more.

One of the more interesting emerging trends within the Internet of X is called the Internet of Behaviors. This is how organizations of all types are leveraging technology and data to monitor behavioral events and manage the data to upgrade or downgrade the experience to influence those behaviors.

What that means is that not only is Big Brother watching you, but so is private enterprise, and far beyond what currently makes the front pages about social media. For example, insurance and health organizations are going to monitor your fitness bands, your food intake, the number of times you go to the gym, and more. The goal is to use such data to minimize risk, maximize opportunity, and optimize operational profits. The insurance industry, for example, will use the data to adjust your premiums.
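
As a toy illustration only, a behavioral premium adjustment of the kind described above might look like the following; the signals, thresholds, and discounts are invented, and no real insurer’s model is implied:

```python
# Toy illustration of a behavior-driven premium adjustment. All
# signals, thresholds, and discounts are invented for illustration.

def adjusted_premium(base: float, gym_visits_per_month: int,
                     avg_daily_steps: int) -> float:
    """Discount the base premium as activity signals improve."""
    discount = 0.0
    if gym_visits_per_month >= 8:
        discount += 0.05
    if avg_daily_steps >= 8_000:
        discount += 0.05
    return base * (1 - discount)

print(adjusted_premium(200.0, gym_visits_per_month=10, avg_daily_steps=9_500))
# prints 180.0, a 10 percent behavioral discount
```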

Similar tactics are already used by banks, media (social and otherwise), retailers, and more to build a picture of your behavior. Then, using any number of analytics tools, they adjust their presentation to better align with your lifestyle.

While this might sound invasive, these organizations claim such activity is designed to offer you a more pleasant and focused experience. Look for this to ramp up in the short term, but also look for watchdog organizations to scrutinize it for nefarious or illegal usage.

5G

Much of the 5G world has been quietly churning away at getting the hardware and software deployed, so there have not been many new trends lately.

However, one of the more visible emerging trends is private 5G networks. Private 5G will be one of the most popular verticals in the ecosystem.

Private networking is a domain that has been largely owned by Wi-Fi for smaller networks and 4G for larger ones. In fact, advanced 4G technologies are also seeing an uptick in private networks.

However, 4G networks suffer from a number of constraints. 5G will be much less restrictive, and it will offer advanced features that make private networks much faster, with higher capacity. They will be costly, at least at the beginning, but as adoption grows, the price point is sure to drop.

Sensors

Sensors may seem like dull, dry technology. However, the latest advances in sensors are about to kick off a wave of new trends.

For example, highly advanced sensors are creating a vast medical implant sector. It is becoming increasingly possible to monitor patients with tissue-implantable, wirelessly linked sensors. This promises to improve quality of life for many who suffer from chronic medical conditions, provide vital data in critical life-safety situations, and monitor any number of environmental conditions (forest fires or dangerous weather, for example).

A typical application: a patient with diabetes could be monitored continuously. A small implanted sensor sends blood glucose readings to a base station over a wireless connection. If the glucose level moves outside a pre-programmed range, the base station sends an alarm to the appropriate caregiver. Managing pain and cardiac care are among many other possible applications, as are blood and chemical monitoring.
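
A minimal sketch of that alarm logic, assuming example thresholds and a stand-in notification hook (both are illustrative, not taken from any real device):

```python
# Minimal sketch of the glucose alarm described above. The thresholds
# and the caregiver-notification hook are illustrative assumptions.

LOW_MG_DL, HIGH_MG_DL = 70.0, 180.0  # example pre-programmed range

def notify_caregiver(reading: float) -> None:
    print(f"ALERT: glucose {reading} mg/dL outside {LOW_MG_DL}-{HIGH_MG_DL}")

def on_sensor_reading(reading: float) -> None:
    """Base-station handler: raise an alarm when a wireless reading
    from the implanted sensor leaves the pre-programmed range."""
    if not (LOW_MG_DL <= reading <= HIGH_MG_DL):
        notify_caregiver(reading)

on_sensor_reading(62.0)  # below range: triggers the alert path
```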

Similar developments are occurring across other fields, such as autonomous vehicles, smart appliances, agriculture, and more. This rapid pace in the sensor segment will be seen across all ecosystems, eventually.
