6G: The Spring-Summer 2024 Collection
During the past five weeks I attended six different conferences and workshops, some of which were entirely dedicated to 6G, such as the EuCNC & 6G Summit, while the others were communication engineering events that unavoidably touched upon 6G. The topics discussed below are far from comprehensive, but they are hopefully treated with a more objective perspective than they would have been without the exposure to these conferences and workshops.
Unsuccessful Generations and Successful Mobile Devices
When the mobile generations are discussed and compared in terms of their level of success, it is usually said that the odd generations (1G, 3G) have not been successful, ergo 5G will not be successful. This is then followed by a comforting observation about the anticipated success of 6G, not least because 6 is an even number. These judgements, coming close to the domain of superstition, are particularly harsh on 3G, a technology that, in some cases, was phased out even before its predecessor, 2G. However, what is rarely mentioned is that the iPhone, and hence the contemporary notion of a smartphone, appeared in 2007. This was the period during which 3G dominated the mobile networks, while 4G was still a couple of years away from deployment.
One view is that the unsuccessful generation gave rise to a successful device. Another view is that the possibilities offered by 3G were creatively used by innovators who, at that time, were not part of mainstream wireless research and standardization. The development of 3G was burdened by the relentless search for the killer app; then 3G was deployed and the killer app turned out to be the smartphone, along with the associated ecosystem of apps. In a mobile communications version of Russell's paradox, the set of apps was not part of the originally designated set of possible killer apps. The year 2007 also happened to fall in the midst of the specification of 4G, such that data-hungry apps became the raison d'être for the development of high-speed mobile communication solutions. While 3G gave rise to a radically new innovation, 4G served as a performance booster for that same innovation.
These observations lead naturally to the question: could the “unsuccessful” 5G give rise to a radically new concept of a mobile device in the coming years? I will try to speculate based on three trends towards 6G: wireless sensing, Artificial Intelligence (AI), and low-latency edge computing. Recall that the iPhone took away the keyboard, which was central to the previous notions of a smart mobile phone. Wireless sensing may, at least partially, take away the need to type input on the mobile device itself: the user could instead “type” it in the air, while powerful AI modules at the edge interpret the movements as inputs and act on them through low-latency edge computing. Note that wireless sensing does not need to wait for 6G; it has already been demonstrated with Wi-Fi and 5G signals. Thus, the next generation of mobile devices may have part of their user interface embedded into the surrounding network. This is compatible with the extensive consideration of digital twins in the context of 6G, except that in this case the network will add augmentation to the physical version of the mobile device. The existence of such a mobile device, whose interfaces are partially outsourced to the network, will likely be boosted after 2030 by the deployment of 6G and its advanced capability for Joint Communication and Sensing (JCAS), also known as Integrated Sensing and Communication (ISAC), which is expected to be one of the core features of the 6G specification.
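To make the idea of an interface outsourced to the network a bit more tangible, here is a minimal sketch of how an edge node might turn wireless-sensing measurements into input events. It assumes the edge already extracts some feature vector from Wi-Fi/5G channel measurements; the gesture labels, feature dimensions, and the simple nearest-centroid classifier are all hypothetical stand-ins for a real sensing pipeline.

```python
# Minimal sketch of "typing in the air": an edge node turns wireless-sensing
# features (e.g., derived from Wi-Fi/5G channel measurements) into input events
# for a device whose user interface is partially outsourced to the network.
# Feature shapes, gesture labels, and training data below are hypothetical.

import numpy as np

GESTURES = ["swipe_left", "swipe_right", "tap"]  # hypothetical gesture set

class EdgeGestureClassifier:
    """Nearest-centroid classifier over per-gesture feature templates."""

    def __init__(self, feature_dim: int = 16):
        self.feature_dim = feature_dim
        self.centroids: dict[str, np.ndarray] = {}

    def fit(self, examples: dict[str, np.ndarray]) -> None:
        # examples[label] has shape (num_samples, feature_dim)
        for label, feats in examples.items():
            self.centroids[label] = feats.mean(axis=0)

    def classify(self, feature_vector: np.ndarray) -> str:
        # Return the gesture whose template is closest to the observed features.
        distances = {label: np.linalg.norm(feature_vector - c)
                     for label, c in self.centroids.items()}
        return min(distances, key=distances.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic training data standing in for processed channel measurements.
    train = {g: rng.normal(loc=i, scale=0.3, size=(50, 16))
             for i, g in enumerate(GESTURES)}
    clf = EdgeGestureClassifier()
    clf.fit(train)
    observed = rng.normal(loc=1, scale=0.3, size=16)  # resembles "swipe_right"
    print("Interpreted input event:", clf.classify(observed))
```

In a real deployment the classifier would, of course, be a far more capable learned model running at the edge, but the division of labor stays the same: the network senses and interprets, the device merely receives the resulting input event.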
Fusing the Digital and Physical Worlds
The speculative mobile device from the text above can be seen as a gateway that fuses the physical and the digital worlds of the device owner. Regardless of whether such a device will appear in the coming years, the fact is that 6G is seen as a system that goes beyond communications to offer sensing and localization. This will lead to an increased fusion of the digital and physical worlds, as also indicated in the keynote by Ericsson at IEEE ICC in Denver. Another keynote from Huawei at the same conference reinforced the argument by showing significant developments in integrated sensing and communication.
In the wake of this physical-digital fusion, what has kept me busy during the past years is the thought that it may change the way we perceive time and the ordering of events, and how we act upon them. Physical time has its own order and arrow, and its manipulation is still in the domain of science fiction. Nevertheless, the perception of time in digital interconnected systems depends on the way information is processed and transported. In the future this may raise questions of time forensics: for instance, when an accident involving autonomous vehicles happens, the mobile network can act as an impartial witness that holds the proper ordering and causality of events. This will position future base stations as the primary guardians of the trustworthiness of the data that enters the digital world from the physical one.
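As a toy illustration of this “impartial witness” role, consider the sketch below: devices report events stamped with their own, possibly skewed, clocks, while the base station logs its own reception timestamps and reconstructs the ordering from those. The event fields and values are hypothetical, and a real system would also have to compensate for transmission and processing delays before claiming causality.

```python
# Illustrative sketch of "time forensics" at a base station: events reported by
# devices carry their own (possibly skewed) clocks, while the base station logs
# its own reception timestamps and can reconstruct an impartial ordering.
# Event fields and numbers are hypothetical; real forensics would also correct
# for transmission and processing delays.

from dataclasses import dataclass

@dataclass
class ReportedEvent:
    device_id: str
    device_timestamp: float   # device's own clock, potentially skewed
    bs_rx_timestamp: float    # timestamp assigned by the base station on reception
    description: str

def impartial_order(events: list[ReportedEvent]) -> list[ReportedEvent]:
    """Order events by the base station's clock, not by what devices claim."""
    return sorted(events, key=lambda e: e.bs_rx_timestamp)

if __name__ == "__main__":
    log = [
        ReportedEvent("vehicle_A", device_timestamp=10.02, bs_rx_timestamp=10.41,
                      description="emergency brake applied"),
        ReportedEvent("vehicle_B", device_timestamp=9.80, bs_rx_timestamp=10.38,
                      description="lane change started"),
    ]
    for e in impartial_order(log):
        print(f"{e.bs_rx_timestamp:.2f}s  {e.device_id}: {e.description}")
```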
Satellite Communication is Cool Again
Ever since Arthur C. Clarke conceived the idea of satellites as relays that support the communication between two distant points on the Earth, there has been a fascination with these machines that literally live outside of this world. This fascination was, nevertheless, not reflected in the exponential evolution of wireless mobile communications since the 1990s. Mobile communications were centered around terrestrial base stations and, in that world, satellite communications were rather marginalized. This has changed in recent years, with the focus on Non-Terrestrial Networks (NTNs), in which satellite communication plays the lead role. Already now, some new smartphones offer a limited satellite connection. Beyond these initial efforts, the ambition is to create a global wireless infrastructure for mobile devices. This infrastructure will integrate a moving part, offered by the fast-moving Low Earth Orbit (LEO) satellites and other flying entities, with a fixed part, represented by the wireless networks deployed on the ground as well as the geostationary satellites. The existence of a device that is well connected to both satellites and terrestrial networks may be the “moment of arrival” for 6G. Indeed, the general public may not see a big change due to the use of FR3 spectrum or AI-native protocol modules, but almost everyone will understand when the smartphone can be connected to satellites whenever needed, especially when this removes the offline time during flights.
Beyond offering to connect personal mobile devices anytime and anywhere (this time literally, not as advertised in 3G/4G), satellites set the basis for massive connectivity of Internet of Things (IoT) devices over wide areas. The mid-2010s gave rise to the NB-IoT standard for connecting IoT devices to the mobile wireless infrastructure, and massive IoT connectivity has been treated as an integral component of 5G. IoT traffic has often been considered to be dominated by uplink transmissions, which is why the bulk of research on massive IoT has dealt with enabling efficient uncoordinated access for a large number of devices. Nevertheless, we can hypothesize that the traffic patterns of 6G IoT devices will become more symmetric. One reason is that, once the connectivity is in place, the ambitions may go beyond collecting data towards interacting with the remote IoT devices. For instance, the global satellite connection can offer closed-loop interaction with logistics containers or even the individual objects within them. Another reason for more symmetric traffic comes from the distributed intelligence across the connected IoT devices, which requires bidirectional connections for learning and inference.
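The shift from report-only to closed-loop operation can be illustrated with a small sketch: in the classic pattern the downlink carries little more than acknowledgements, whereas a closed loop returns a command of comparable size for each report, pushing the traffic towards symmetry. The message sizes below are hypothetical placeholders.

```python
# Illustrative sketch of why closed-loop interaction makes IoT traffic more
# symmetric: each uplink report can trigger a downlink command, unlike the
# classic report-only (uplink-dominated) pattern. All byte counts are
# hypothetical placeholders.

def uplink_only_round(report_bytes: int) -> tuple[int, int]:
    """Classic massive-IoT pattern: device reports, network only acknowledges."""
    ack_bytes = 4
    return report_bytes, ack_bytes  # (uplink, downlink)

def closed_loop_round(report_bytes: int, command_bytes: int) -> tuple[int, int]:
    """Closed-loop pattern: the network reacts with a command of comparable size."""
    return report_bytes, command_bytes

if __name__ == "__main__":
    rounds = 1000
    ul1 = dl1 = ul2 = dl2 = 0
    for _ in range(rounds):
        u, d = uplink_only_round(report_bytes=64)
        ul1, dl1 = ul1 + u, dl1 + d
        u, d = closed_loop_round(report_bytes=64, command_bytes=48)
        ul2, dl2 = ul2 + u, dl2 + d
    print(f"uplink-only:  DL/UL ratio = {dl1 / ul1:.2f}")
    print(f"closed-loop:  DL/UL ratio = {dl2 / ul2:.2f}")
```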
Sustainability
While undeniably important, considering the state and the resources of the Earth, sustainability has sometimes been used as a concept, or even a word, for virtue signaling while lacking substance in a given context. For example, making mobile communication systems energy efficient has always been an objective. Thus, if I state that I make a wireless system sustainable by making its modulation or access protocol more energy-efficient, I am basically serving the same old drink in a bottle with a new label. In that sense, it was refreshing to see the operator’s perspective on 6G provided by Orange at the IEEE Communication Theory Workshop, advocating that the deployment of future networks should also look into the use of raw materials and other resources needed to run those networks and devices. Some of this material is covered in the white paper produced by Orange in April 2024. Hence, it is not only about making the communication system more reliable or faster, but rather about a holistic calculation of the overall environmental impact. Putting such performance measures in place can lead, for example, to a more economic use of AI, rather than throwing AI and the associated computation at every problem.
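As a rough illustration of such a holistic calculation, the sketch below scores a deployment option not only on operational energy, but also on embodied raw materials and the energy spent on AI computation. All categories, numbers, and conversion factors are hypothetical placeholders, not figures from the Orange white paper.

```python
# Illustrative sketch of a "holistic" environmental accounting, in the spirit of
# looking beyond energy-efficient modulation: a deployment option is scored not
# only on operational energy but also on embodied raw materials and AI compute.
# All categories, units, numbers, and conversion factors are hypothetical.

from dataclasses import dataclass

@dataclass
class DeploymentOption:
    name: str
    operational_energy_kwh_per_year: float
    embodied_material_kg_co2e: float     # manufacturing/raw-material footprint
    ai_compute_kwh_per_year: float       # training + inference energy

def footprint_score(opt: DeploymentOption, grid_kg_co2e_per_kwh: float = 0.3,
                    lifetime_years: float = 7.0) -> float:
    """Rough lifetime CO2-equivalent estimate for comparing options."""
    operational = ((opt.operational_energy_kwh_per_year + opt.ai_compute_kwh_per_year)
                   * lifetime_years * grid_kg_co2e_per_kwh)
    return operational + opt.embodied_material_kg_co2e

if __name__ == "__main__":
    options = [
        DeploymentOption("AI-everywhere", 12000, 9000, 6000),
        DeploymentOption("AI-where-it-pays-off", 12500, 8000, 1500),
    ]
    for o in sorted(options, key=footprint_score):
        print(f"{o.name}: ~{footprint_score(o):,.0f} kg CO2e over lifetime")
```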
Communication Standards in the Age of AI
One question that came up multiple times during the EuCNC conference was what communication standards will look like in an age of widespread AI use, especially given that AI is seen as a natively integrated part of 6G. In a grossly oversimplified description, a communication standard is a specification of interfaces, state machines, and the format/coding of the messages associated with the transitions within these state machines. We can continue to have human-specified interfaces, state machines, and messages, while using AI to efficiently implement the modules of the communication system that respect the interfaces, the state transitions, and the message formats. As a more advanced step, we can use AI to work alongside humans and help specify the interfaces, state machines, and messages, while also playing a role in the implementation of the communication modules. What is common to these two uses of AI in communication protocols is that the standard remains static after specification; that is, any adaptation and optimization offered by the AI algorithms is constrained to respect the standard that has been previously specified and fixed. In other words, the AI module of a communication system can gain experience by collecting data during the lifetime of the system and may come up with a better HARQ protocol, but the fixed standard would not allow this potentially clever, yet ad hoc, modification.
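A minimal sketch of this “AI within a fixed standard” setting is given below: the states and message format are frozen by a hypothetical specification, while a swappable policy, which could be a trained model, only tunes implementation-level parameters such as the amount of redundancy, without touching the states or messages.

```python
# Minimal sketch of "AI within a fixed standard": the state machine and message
# format are fixed by a (hypothetical) specification, while a pluggable policy
# -- possibly a learned model -- only decides implementation-level parameters
# (here, how much redundancy to send) without altering states or messages.

from enum import Enum, auto

class State(Enum):          # states fixed by the standard
    IDLE = auto()
    WAIT_FOR_ACK = auto()

class FixedStandardTransmitter:
    def __init__(self, redundancy_policy):
        self.state = State.IDLE
        self.redundancy_policy = redundancy_policy  # swappable, e.g., AI-driven

    def send(self, payload: bytes, channel_quality: float) -> dict:
        assert self.state == State.IDLE
        # The policy may be arbitrarily clever, but the message format is fixed.
        redundancy_bits = self.redundancy_policy(channel_quality)
        self.state = State.WAIT_FOR_ACK
        return {"type": "DATA", "payload": payload, "redundancy": redundancy_bits}

    def receive_ack(self, ack: bool) -> None:
        assert self.state == State.WAIT_FOR_ACK
        self.state = State.IDLE if ack else State.WAIT_FOR_ACK

# Two interchangeable policies behind the same fixed interface:
def rule_based_policy(channel_quality: float) -> int:
    return 64 if channel_quality < 0.5 else 16

def learned_policy(channel_quality: float) -> int:
    # Stand-in for a trained model's output; any cleverness stays inside here.
    return max(8, int(80 * (1.0 - channel_quality)))

if __name__ == "__main__":
    tx = FixedStandardTransmitter(learned_policy)
    print(tx.send(b"hello", channel_quality=0.3))
    tx.receive_ack(True)
```

The point of the sketch is that the learned policy can keep improving from collected data, yet anything it learns stays confined within the pre-specified states, transitions, and message formats.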
A different option would be to define standards that embrace continuous learning at the devices and the network. This may lead to allowing ad hoc changes of the protocols, that is, of the interfaces, state machines, and messages. A number of interesting approaches arise in connection with this possibility. For instance, one option is to have a core protocol, with a pre-specified state machine, to which the device and the network can always fall back as a default mode of communication, while devices that have a history of interaction and common learning can rely on customized variants of the protocol. This is an instance of semantic communication facilitated by learning, a topic that has seen an inflation of attention in recent academic research. As a cautionary note, opening the door to protocols with continuous learning raises problems of security, protocol verification, and trustworthiness. Coupled with the role of future base stations in the trustworthy coupling of the digital and physical worlds, AI-driven communication standards will critically depend on the explainability of the designs and algorithms.
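The core-protocol fallback can be sketched as a simple negotiation: two endpoints use a shared learned variant if their history of common learning has produced one, and otherwise fall back to the pre-specified core protocol. The variant identifiers below are hypothetical.

```python
# Illustrative sketch of the core-protocol fallback idea: two endpoints first
# check whether they share a learned, customized protocol variant; if not, they
# fall back to the pre-specified core protocol. Variant identifiers and the
# tie-breaking rule are hypothetical.

CORE_PROTOCOL = "core-v1"  # pre-specified default that every endpoint supports

def negotiate(device_variants: set[str], network_variants: set[str]) -> str:
    """Pick a shared learned variant if one exists; otherwise use the core protocol."""
    shared = device_variants & network_variants
    # Prefer the lexicographically "latest" shared variant; any agreed rule would do.
    return max(shared) if shared else CORE_PROTOCOL

if __name__ == "__main__":
    # Endpoints with a history of common learning share a customized variant.
    print(negotiate({"core-v1", "learned-A3"}, {"core-v1", "learned-A3"}))  # learned-A3
    # A newcomer without shared history falls back to the core protocol.
    print(negotiate({"core-v1"}, {"core-v1", "learned-A3"}))                # core-v1
```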