
Latest Microsoft patent envisions 'toxicity shield' for multiuser environments

The issue of cyberbullying is something that companies have attempted to address in the past, with some even leveraging machine learning to identify instances of online bullying. Toxic virtual environments can be breeding grounds for such incidents. However, what counts as 'toxic' varies from person to person: an expletive that causes one user to take offense may not bother another at all.

Microsoft seems to have an interesting idea to combat controversial multiuser environments, with its latest patent envisioning a personalized 'toxicity shield' that would monitor communications and censor data according to user preference.

The communications could comprise voice-based or text-based chats, such as those in multiplayer gaming sessions, or extend to graphical displays in video streams. The system could also be implemented as part of online web conferences or VR/AR sessions. This would enable users to communicate effectively with many others without any one participant becoming a source of annoyance for them.

However, the content considered by the toxicity shield modules would not be limited to expletives; it may also include undesirable acts such as someone breathing heavily into the microphone, or someone standing a little too close in a virtual environment. A blanket ban on such actions will always feel overly harsh to some, which is why the described system would censor them only for the user who feels uncomfortable in such an environment, as can be observed in the picture above.

As for how the toxicity shield works, it may analyze 'tolerance data' to deploy a toxicity-shield module (TSM) tailored to a particular user. These TSMs could also be based on prediction models generated by a machine learning engine that analyzes reactions to toxic behaviors. Default profiles may supply the engine with all the information it needs, or users may configure their preferences manually for the same purpose. The system would then respond accordingly in "substantially real-time" by muting, censoring, or simply not playing the unwanted data. In some cases, it may also send warning messages to the perpetrator of the undesirable acts or pause all communications with them.
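To illustrate the per-user idea described above, here is a minimal sketch of how such a toxicity-shield module might be structured. All names (`ToleranceProfile`, `ToxicityShield`, the word-masking logic) are hypothetical and chosen for illustration; the patent does not specify an implementation, and a real system would presumably rely on the machine-learning prediction models mentioned above rather than a simple word list.

```python
from dataclasses import dataclass, field

@dataclass
class ToleranceProfile:
    """Hypothetical per-user 'tolerance data': what this user finds toxic."""
    blocked_words: set = field(default_factory=set)
    mute_heavy_breathing: bool = False

@dataclass
class ToxicityShield:
    """Per-user filter: content is censored only for users whose profile flags it,
    leaving the same message untouched for everyone else."""
    profile: ToleranceProfile

    def filter_text(self, message: str) -> str:
        # Mask only the words this user's profile objects to.
        censored = [
            "*" * len(word)
            if word.lower().strip(".,!?") in self.profile.blocked_words
            else word
            for word in message.split()
        ]
        return " ".join(censored)

    def should_mute_audio(self, event: str) -> bool:
        # Non-verbal annoyances (e.g. heavy breathing) are muted per preference.
        return event == "heavy_breathing" and self.profile.mute_heavy_breathing
```

The key design point, mirroring the patent's description, is that filtering happens on the receiving user's side: two users in the same session can see different versions of the same message, so no blanket ban is imposed on anyone.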

If executed smartly, such a system could prove an elegant answer to content in online multiuser environments that is unacceptable to some users but tolerable to others. However, it is important to note that patents usually take years to materialize, if they ever reach the public at all.
