Chrome 93 is landing today, removes 3DES encryption support in TLS, adds WebOTP on desktop
by Usama Jawad
Chrome 92 made its way to the Stable channel back in July, deprecating payment handler configurations and enhancing Progressive Web Apps (PWAs), among many other things. Today, Chrome 93 will be landing for the public. Apart from the features and enhancements that it brings, this Chrome release is also significant because it's the last version in Google's regular release cadence. Starting with Chrome 94 next month, the company will shift to a four-week release cycle.
Starting with deprecations, Chrome 93 is removing support for the 3DES cipher block chaining (CBC) cipher suite in Transport Layer Security (TLS). Google has cited a number of reasons for this, including the fact that newer and better AES-based replacements have been available for the past couple of decades. 3DES is also vulnerable to the Sweet32 attack, and is extremely slow, battery-draining, and CPU-intensive, especially on mobile platforms. Furthermore, many of its implementations leak information through cache and timing side channels. In the same vein, Google is also blocking connections to HTTP, HTTPS, and FTP servers on ports 989 and 990. This is being done for security reasons, as a mitigation against the ALPACA attack.
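For administrators who want to check whether any of their servers still offer the legacy suites, the following is a minimal Node.js sketch in TypeScript that attempts a handshake restricted to a single 3DES CBC cipher. The host name is a placeholder, and OpenSSL builds at higher security levels may refuse to enable 3DES at all, in which case the probe simply errors out.

```typescript
// Probe whether a server still accepts a 3DES CBC suite (Node.js sketch).
import * as tls from "node:tls";

const socket = tls.connect(
  {
    host: "example.com",     // hypothetical host, for illustration only
    port: 443,
    maxVersion: "TLSv1.2",   // 3DES suites only exist up to TLS 1.2
    ciphers: "DES-CBC3-SHA", // OpenSSL name for a 3DES CBC cipher suite
    rejectUnauthorized: false,
  },
  () => {
    console.log("Server still negotiates:", socket.getCipher());
    socket.end();
  }
);

socket.on("error", (err: Error) => {
  console.log("3DES-only handshake was refused:", err.message);
});
```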
Google is further enhancing Chrome by allowing web apps to identify themselves as note-taking apps when needed, which will also allow for OS-level integrations. The Multi-Screen Window Placement API is being enhanced to cater to productivity use cases where spreading Chrome across multiple screens results in a better experience. Support for the WebOTP API is also arriving in Chrome on desktop, allowing developers to retrieve one-time passwords (OTPs) sent via SMS in a specific format and sync them between Chrome on Android and on desktop, provided that you are signed in to the same Google account. Given the increasing popularity of SVG images, the Clipboard API now supports this format too. Similarly, with browser vendors now adding playback speed controls to their offerings, Chrome is giving websites a way to enable or disable this control for media playing on their pages.
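As a rough illustration of how the WebOTP API is used, here is a minimal browser-side TypeScript sketch; the input selector and surrounding markup are assumptions, and TypeScript's built-in DOM typings may not yet include the OTP credential type, hence the cast.

```typescript
// Minimal WebOTP sketch: wait for a specially formatted SMS and fill in the code.
const otpInput = document.querySelector<HTMLInputElement>(
  "input[autocomplete='one-time-code']"
);

if ("OTPCredential" in window && otpInput) {
  const controller = new AbortController();

  navigator.credentials
    .get({
      // "otp" comes from the WebOTP spec; the DOM typings may not include it yet.
      otp: { transport: ["sms"] },
      signal: controller.signal,
    } as CredentialRequestOptions)
    .then((cred) => {
      // The returned credential carries the one-time code sent via SMS.
      otpInput.value = (cred as unknown as { code: string }).code;
    })
    .catch((err) => console.warn("WebOTP request failed:", err));
}
```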
Apart from all of the above, Google has packed a ton of developer-facing capabilities into Chrome 93 too. A CSS property is being added to change the color of form controls, the "style" keyword is being added back to the "contain" property, support for CSS Module Scripts is being added, the AbortSignal.abort() static factory method is being introduced to make developers' lives easier, and the meta element's "media" attribute will now be honored so developers can switch between the theme colors of their website using a media query. In the same vein, flex containers and flex items will obey the positional alignment keywords laid out in the relevant W3C specification.
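Of those additions, AbortSignal.abort() is the easiest to illustrate. The sketch below, using a hypothetical URL, shows how it replaces the older pattern of constructing an AbortController purely to abort it immediately.

```typescript
// AbortSignal.abort() returns a signal that is already in the aborted state,
// replacing the old pattern of creating an AbortController just to abort it.
const alreadyAborted: AbortSignal = AbortSignal.abort();

// Hypothetical URL; the request rejects immediately with an AbortError.
fetch("https://example.com/data", { signal: alreadyAborted }).catch((err) => {
  console.log("Request aborted:", (err as Error).name); // "AbortError"
});
```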
Chrome 93 is expected to roll out later today. If it does not update to version 93 automatically for you over the course of the day, head over to Help > About Google Chrome to trigger the update once it becomes available. Next up is Chrome 94, which is currently in the Beta channel, with a Stable release expected on September 21, three weeks from now. With Chrome 94, Google will move to a four-week release cycle for subsequent Chrome releases.
Messenger will now encrypt your voice and video calls
by Paul Hill
Facebook has begun the rollout of end-to-end (E2E) encrypted voice and video calls, which it has been testing for a while, on its Messenger service. Messenger already provides E2E encryption for written messages through its secret conversations feature, and this latest update will help secure the audio and video side of your communications. Also included in this update are revamped controls for disappearing messages, so users have greater choice over how long they’d like their messages to stick around.
To be clear, Facebook doesn’t encrypt your normal messages by default; instead, you need to tap the ‘i’ in the current chat and press ‘Go to secret conversation’, and any messages sent there are encrypted. From today, these secret conversation windows will come with a call option and a video option; you must select these from a secret conversation to benefit from the E2E encryption.
Facebook launched secret conversations five years ago, but due to COVID-19 it has seen an uptick in the number of audio and video calls being made. For this reason, it decided it would be worthwhile to offer these services from the secret conversation window to give users greater privacy and help people place more trust in the Facebook brand.
Disappearing messages are another option unique to secret conversations. With today’s update, users will be given greater choice over how long they’d like a message to stick around; you can now choose to have messages disappear after anywhere from five seconds to 24 hours.
In coming updates, Facebook will enable end-to-end encrypted group chats and calls in Messenger, as well as opt-in end-to-end encryption for Instagram DMs. Some users may see these options before they’re released publicly, and they could begin showing up in just a matter of weeks.
Apple reveals more details about its child safety photo scanning technologies
by Usama Jawad
Apple has been the target of criticism since it revealed that it will be introducing child safety features into its ecosystem that would allow the scanning of Child Sexual Abuse Material (CSAM). An open letter demanding that Apple halt the deployment of this technology already has thousands of signatories. The firm had internally acknowledged that some people are worried about the new features, but said that this is due to misunderstandings that it will be addressing in due course. Today, it has made good on that promise.
In a six-page FAQ document that you can view here, Apple has emphasized that its photo scanning technology is split into two distinct use cases.
The first has to do with detecting sexually explicit photos sent or received by children 12 years of age or younger via the Messages app. This capability uses on-device machine learning to automatically blur problematic images, inform children that they do not have to view the content, provide them with guidance, and notify their parents if they still opt to view such images. In the same scenario, children aged 13-17 will be given similar guidance, but their parents will not be notified. For this flow to function, child accounts need to be set up in family settings on iCloud, the feature has to be opted into, and parental notifications need to be enabled for children.
No other entity, including Apple or a law enforcement authority, is informed if a child sends or receives sexually explicit images. As such, this does not break any existing privacy assurances or end-to-end encryption. Apple has emphasized that the feature is applicable to Messages only, which means that if a child is being abused, they can still reach out for help via text or other communication channels.
The second prong of Apple's child safety approach is about keeping CSAM off iCloud Photos. In this case, hashes of iCloud images will be compared against known CSAM images, and the company will be notified if a match is detected. This feature does not work for private on-device images or if iCloud Photos is disabled.
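To give a rough idea of what hash-set matching looks like in general, here is a deliberately simplified TypeScript sketch. It uses SHA-256 and a placeholder digest purely for illustration; Apple's actual system relies on a perceptual NeuralHash combined with private set intersection and threshold techniques, none of which is reproduced here.

```typescript
// Deliberately simplified hash-set matching (not Apple's actual pipeline):
// compute a digest of a local file and check it against known digests.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Placeholder entry; a real database is never distributed as readable hashes.
const knownDigests = new Set<string>([
  "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
]);

function matchesKnownDigest(path: string): boolean {
  const digest = createHash("sha256").update(readFileSync(path)).digest("hex");
  return knownDigests.has(digest);
}

console.log(matchesKnownDigest("./photo.jpg")); // hypothetical local file
```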
The firm has emphasized that it does not download any CSAM images onto your device to compare against. Instead, it computes hashes of your images and compares them to hashes of known CSAM content to determine a match. Apple went on to say that:
The company has revealed other details about its end-to-end process for detecting CSAM images as well. It has stated that its system does not work for anything other than CSAM media, as even the possession of such images is illegal in many countries. That said, authorities are not automatically informed. If there is a match, Apple first conducts a human review before notifying authorities.
The Cupertino tech giant has bluntly stated that it will not add non-CSAM images to its repository for comparison, even if there is pressure from certain governments. In the same vein, Apple itself does not add hashes beyond those of known CSAM images, and since these are all stored at the OS level, specific individuals cannot be targeted through misuse of the technology.
Finally, Apple has boasted that its system is extremely accurate, with the likelihood of incorrectly flagging a given account being less than one in one trillion per year. Even in the worst case, there is a human reviewer in place as a safety net, who performs a manual review of a flagged account before it is reported to the National Center for Missing and Exploited Children (NCMEC).
Apple VP on iCloud Photos scanning: We know people have misunderstandings and are worried
by Usama Jawad
Apple recently announced that it is introducing new child safety features to its ecosystem, including the ability to scan photos uploaded to iCloud using on-device machine learning and comparing their hashes to known images of child sexual abuse material (CSAM) from the National Center for Missing and Exploited Children's (NCMEC) repository. Another feature may also inform parents if their child - who is below 13 years old - shares or receives sexually explicit content.
The move has drawn criticism from many tech experts and entities, such as the head of WhatsApp, the Electronic Frontier Foundation (EFF), Edward Snowden, and more, who call it a breach of privacy despite being well-intentioned. Apple is fully aware of the debate it has created, as can be seen in an internal company memo.
The document in question was obtained by 9to5Mac and contains words from Apple's Software VP Sebastien Marineau-Mes. An excerpt from the memo reads:
The attached note from NCMEC congratulates Apple for its efforts and says that "we know that the days to come will be filled with the screeching voices of the minority".
Overall, it's clear that Apple is aware that it has opened up a somewhat difficult topic since it involves scanning photos of its users, computing hashes, and then comparing them against CSAM databases. While the company claims that it is doing this in a privacy-protective manner using on-device machine learning, many are understandably concerned about the potential for misuse. Marineau-Mes does say that the firm will be explaining the features in more detail "in the next few months", so it will be interesting to see whether it can tackle the concerns that are being raised by the public.
Zoom agrees to pay $86 million to settle class-action lawsuit
by Usama Jawad
While online collaboration and communication platforms have boomed in the past year or so due to the ongoing pandemic, Zoom has been facing an uphill battle. In April 2020, a vulnerability in the software allowed attackers to steal Windows credentials, and many companies such as SpaceX, Google, and Standard Chartered banned its use. Although Zoom did fix the issues eventually, the damage had already been done, and the company was sued by multiple entities. Now, it has reached a settlement in one such class-action lawsuit in the U.S.
The BBC reports that the lawsuit in question claimed that Zoom did not safeguard the privacy of its users and shared their data with firms like Google, Facebook, and LinkedIn. It also took shots at the software's security features, saying that zoombombing was a serious issue and that the service's end-to-end encryption claims did not hold up. It was filed on behalf of all paid and free subscribers in the U.S.
While Zoom did not admit to wrongdoing regarding any of the aforementioned accusations, it has still reached a settlement worth $86 million with the plaintiffs. It also agreed to enhance its security features and to provide its staff with training on best practices for handling sensitive data. Yet another provision is that the subscribers included in the class action should be offered 15% refunds on their subscriptions or $25, whichever is greater. A Zoom representative stated that:
It is important to note that the settlement has not yet been approved by U.S. District Judge Lucy Koh. The lawyers on the plaintiffs' side have also stated that they plan to seek a further $21.3 million from Zoom to cover legal fees.