The Glowworm Attack, as the discovery is called, follows similar research from the university published in 2020, which found that an electro-optical sensor paired with a telescope could decipher the sounds in a room: sound waves bouncing off a hanging light bulb create nearly imperceptible changes in the room's lighting. With the Glowworm Attack, the same technology that made Lamphone possible is repurposed to eavesdrop remotely on sounds in a room again, but using a completely different approach, one that many speaker makers apparently never even considered.
[…]
Pairing the sensor with a telescope allowed the security researchers at Ben-Gurion University to capture and decipher sounds played by a speaker at distances of up to 35 meters, or close to 115 feet. The results aren’t crystal clear (you can hear the researchers’ remote recordings on Ben Nassi’s website), and the noise increases the farther the capture device is from the speaker, but with some intelligent audio processing the results can undoubtedly be improved.
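The optical side channel lends itself to a simple signal-processing sketch: a photodiode's samples carry a large constant brightness plus a tiny audio-rate ripple, and removing the DC level and slow drift recovers a usable waveform. The toy pipeline below is illustrative only, not the researchers' actual processing chain:

```python
import numpy as np

def recover_audio(intensity, fs, low_hz=100.0):
    """Strip the DC light level and slow drift, then normalize to [-1, 1]."""
    sig = intensity - np.mean(intensity)
    # crude high-pass: subtract a moving average spanning ~1/low_hz seconds
    win = max(1, int(fs / low_hz))
    drift = np.convolve(sig, np.ones(win) / win, mode="same")
    audio = sig - drift
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

if __name__ == "__main__":
    fs = 8000
    t = np.arange(fs) / fs                      # one second of samples
    tone = np.sin(2 * np.pi * 440 * t)          # the sound being played
    rng = np.random.default_rng(0)
    # measured light: big constant brightness + tiny audio ripple + noise
    measured = 1000.0 + 0.01 * tone + 0.001 * rng.standard_normal(fs)
    recovered = recover_audio(measured, fs)
    corr = float(np.corrcoef(recovered, tone)[0, 1])
    print("correlation with original tone:", round(corr, 2))
```

In the real attack the "intelligent audio processing" is far more involved, but the core idea is the same: the audio lives in a tiny modulation riding on a large, slowly varying light level.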
The remote code execution flaw, CVE-2021-35395, was seen in Mirai malware binaries by threat intel firm Radware, which “found that new malware binaries were published on both loaders leveraged in the campaign.”
Warning that the vuln had been included in Dark.IoT’s botnet “less than a week” after it was publicly disclosed, Radware said: “This vulnerability was recently disclosed by IoT Inspectors Research Lab on August 16th and impacts IoT devices manufactured by 65 vendors relying on the Realtek chipsets and SDK.”
The critical vuln, rated 9.8 on the CVSS scale, consists of multiple routes to cause buffer overflows (PDF from Realtek with details) in the web management interface provided by Realtek in its Jungle SDK for its router chipsets. At minimum, CVE-2021-35395 enables denial of service: crafted inputs from an attacker can crash the HTTP server running the management interface, and with it the router.
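This class of bug can be sketched as a crafted HTTP request carrying one overlong form parameter, which the vulnerable server copies into a fixed-size buffer without a length check. The endpoint and parameter names below are placeholders, not the real vulnerable ones, and the request is only constructed, never sent:

```python
# Sketch of the kind of crafted input behind CVE-2021-35395. The path and
# parameter name are hypothetical stand-ins for the SDK's form handlers.

def build_overflow_request(host: str, path: str = "/formExample",
                           param: str = "example-param",
                           overflow_len: int = 2048) -> bytes:
    """Return a raw HTTP POST whose body holds one overlong parameter."""
    body = f"{param}={'A' * overflow_len}"
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n\r\n"
    )
    return (headers + body).encode()

req = build_overflow_request("192.0.2.1")
print(req.decode().splitlines()[0])   # POST /formExample HTTP/1.1
```

Because the management interface often listens on the LAN and sometimes the WAN side, a single such request from an attacker who can reach the interface is enough to take the router down.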
[…]
Rather than developing its own exploits, Dark.IoT sits around waiting for white hats to publish proof-of-concept code for newly discovered vulns; Smith said the crew incorporates those into its botnet within “days.”
[…]
While Realtek has patched the vulns in the SDK, vendors using its white-label tech now have to distribute patches for their branded devices and then users have to install them – all while Dark.IoT and other Mirai-based criminals are looking for exploitable devices.
Leading Australian digital outdoor media company QMS has unveiled its latest neuroscience study that demonstrates the relative impact of different Out of Home creative approaches and their overall effectiveness for brands.
In partnership with Neuro-Insight, the study measured responses to real-life digital and static OOH panels continuously over consecutive days, to accurately capture how the human brain responds to a piece of creative advertising each day.
The study revealed that long-term memory encoding, critical for campaign effectiveness, continues to grow in respondents who are exposed to evolving creative. In fact, by day five, creative that evolves was shown to deliver a 38% higher impact than static creative.
Spanning 30 creatives across 15 categories, the study found that one of its strongest-performing campaigns harnessed the capabilities of digital OOH (DOOH) with a simple creative change: displaying the day of the week matched with the live temperature at the time. That delivered an 18% stronger result than the average DOOH campaign.
QMS Chief Strategy Officer Christian Zavecz said it was integral for both media owners and advertisers to properly understand the additional value DOOH’s capabilities deliver and how they can be used to drive greater campaign efficacy.
“DOOH in Australia already represents 61% of the industry* however, the uptake of creative capabilities amongst clients is still quite low. Now, for the first time, we can quantify what we have always intuitively thought about the medium. Incorporating the strategic use of creative evolution into a brand’s campaign is now proven to increase its effectiveness. The study also uncovered some important lessons about frequency and the role that DOOH, through its breadth of capabilities, can play in being able to maximise effective OOH campaign reach.”
QLED-loving thieves, beware: Samsung revealed on Tuesday that its TVs can be remotely disabled if the company finds out they’ve been stolen, so long as the sets in question are connected to the internet.
Known as “Samsung TV Block,” the feature was first announced in a press release earlier this month after the company deployed it following a string of warehouse lootings triggered by unrest in South Africa. In the release, Samsung said that the technology comes “already pre-loaded on all Samsung TV products,” and said that it “ensures that the television sets can only be used by the rightful owners with a valid proof of purchase.”
TV Block kicks in once the user of a stolen television connects it to the internet, which is necessary in order to operate the smart TVs. Once connected, the television checks in with a Samsung server using its serial number; if that serial number has been flagged as stolen, a blocking mechanism is triggered that effectively disables all of the TV’s functions.
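As described, the block decision reduces to a serial-number lookup at check-in time. A minimal mock of that flow, with all names hypothetical (Samsung's actual protocol is not public):

```python
# Hypothetical mock of the check-in flow described above: stolen serials
# are registered server-side, and a TV that phones home with a listed
# serial gets a "block" response instead of "ok".

class TVBlockServer:
    def __init__(self):
        self.stolen_serials = set()

    def report_stolen(self, serial: str) -> None:
        self.stolen_serials.add(serial)

    def check_in(self, serial: str) -> str:
        """What the server tells a TV when it connects to the internet."""
        return "block" if serial in self.stolen_serials else "ok"

server = TVBlockServer()
server.report_stolen("SN-12345")
print(server.check_in("SN-12345"))   # block
print(server.check_in("SN-99999"))   # ok
```

The notable design point is that the TV trusts whatever server its check-in traffic reaches, which is exactly the weakness the commentary below picks up on.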
While the release only mentions the blocking function relative to the TVs that had been looted from the company’s warehouse, the protection could also ostensibly be applied to individual customers who’ve had their TVs stolen and report the device’s serial number to Samsung.
This means that you could reroute a TV’s check-in traffic to your own server and trigger the blocking mechanism yourself quite easily. Nice way to brick a whole load of Samsung TVs!
Facebook, Netflix and Google have all received reprimands or fines, and an order to make corrective action, from South Korea’s government data protection watchdog, the Personal Information Protection Commission (PIPC).
The PIPC announced a privacy audit last year and has now revealed that three companies – Facebook, Netflix and Google – were in violation of the law and had insufficient privacy protections.
Facebook alone was ordered to pay 6.46 billion won (US$5.5M) for creating and storing facial recognition templates of 200,000 local users without proper consent between April 2018 and September 2019.
Another 26 million won (US$22,000) penalty was issued for illegally collecting social security numbers, not issuing notifications regarding personal information management changes, and other missteps.
Facebook has been ordered to destroy facial information collected without consent or obtain consent, and was prohibited from processing identity numbers without legal basis. It was also ordered to destroy collected data and disclose contents related to foreign migration of personal information. Zuck’s brainchild was then told to make it easier for users to check legal notices regarding personal information.
[…]
Netflix’s fine was a paltry 220 million won (US$188,000), with that sum imposed for collecting data from five million people without their consent, plus another 3.2 million won (US$2,700) for not disclosing international transfer of the data.
Google got off the easiest, with just a “recommendation” to improve its personal data handling processes and make legal notices more precise.
The PIPC said it is not done investigating how overseas businesses collect personal information and will continue with a legal review.
OnlyFans has dropped plans to ban pornography from its service, less than a week after the U.K. content-creator subscription site announced the change, citing the need to comply with the policies of its banking partners.
On Wednesday, the company said it “secured assurances necessary to support our diverse creator community,” suggesting that it has new agreements with banks to pay OnlyFans’ content creators, including those who share sexually explicit material.
[…]
An OnlyFans spokesperson declined to say which bank or banks it has new or renewed payment-processing agreements with. “The proposed Oct. 1, 2021 changes are no longer required due to banking partners’ assurances that OnlyFans can support all genres of creators,” the rep said.
So was this all much ado about nothing?
OnlyFans may have been able to resolve its conflict with banks, some of which had refused to do business with the site, by going public with the issue — and publicizing the large amount of money that flows through the site, on the order of $300 million in payouts per month.
OnlyFans founder and CEO Tim Stokely put the blame for the porn ban on banks in an interview with the Financial Times published Aug. 24, saying that banks including JP Morgan Chase, Bank of New York Mellon and the U.K.’s Metro Bank had cut off OnlyFans’ ability to pay creators.
The furious backlash among OnlyFans creators also certainly pushed the company to quickly resolve the problem. OnlyFans’ decision to ban porn had infuriated sex workers who have relied on the site to support themselves. In frustration, some adult creators had already nixed their OnlyFans pages and moved to alternate platforms.
Infosec pros and other technically minded folk have just under a week left to comment on EU plans to introduce new regulations obligating consumer IoT device makers to address online security issues, data protection, privacy and fraud prevention.
Draft regulations applying to “internet-connected radio equipment and wearable radio equipment” are open for public comment until 27 August – and the resulting laws will apply across the bloc from the end of this year, according to the EU Commission.
Billed as assisting Internet of Things device security, the new regs will apply to other internet-connected gadgets in current use today, explicitly including “certain laptops” as well as “baby monitors, smart appliances, smart cameras and a number of other radio equipment”, “dongles, alarm systems, home automation systems” and more.
[…]
The Netherlands’ FME association has already raised public concerns about the scope of the EU’s plans, specifically raising the “feasibility of post market responsibility for cybersecurity”.
The trade association said: “If there is a low risk exploitable vulnerability; at what level can the manufacturer not release or delay a patch, and what documentation is required to demonstrate that this risk assessment was conducted with this outcome of a very low risk vulnerability?”
While there are certainly holes that can be picked in the draft regs, cheap and cheerful internet-connected devices pose a real risk to the wider internet because of the ease with which they can be hijacked by criminals.
[…]
Certain router makers have learned the hard way that end-of-life equipment that contain insecurities can have a reputational as well as security impact. That said, it’s perhaps unreasonable to expect kit makers to keep providing software patches for years after they’ve stopped shipping a device. Consumers cannot rely on news outlets shaming makers of internet-connected goods into providing better security; new laws are the inevitable next stage, and there’s a growing push for them on both sides of the Atlantic.
Device makers being banned from selling in the EU over security and data protection issues is not new. In 2017, the German telecoms regulator banned the sale of children’s smartwatches that allowed users to secretly listen in on nearby conversations. Later that year, the French data protection agency issued a formal notice to a biz peddling allegedly insecure Bluetooth-enabled toys – Genesis Toys’ My Friend Cayla doll and the i-Que robot – because the doll could be misused to eavesdrop on kids. Manufacturers are also obliged to comply with the GDPR. However, the new draft law is evidence that certain loopholes might soon begin to close.
New research shows that misconfigurations of a widely used web tool have led to the leaking of tens of millions of data records.
Microsoft’s Power Apps, a popular development platform, allows organizations to quickly create web apps, replete with public-facing websites and related backend data management. Many governments have used Power Apps to swiftly stand up COVID-19 contact-tracing interfaces, for instance.
However, incorrect configurations of the product can leave large troves of data publicly exposed to the web—which is exactly what has been happening.
Researchers with cybersecurity firm UpGuard recently discovered that as many as 47 different entities—including governments, large companies, and Microsoft itself—had misconfigured their Power Apps to leave data exposed.
The list includes some very large institutions, including the state governments of Maryland and Indiana and public agencies for New York City, such as the MTA. Large private companies, including American Airlines and transportation and logistics firm J.B. Hunt, have also suffered leaks.
UpGuard researchers write that the troves of leaked data have included a lot of sensitive material, including “personal information used for COVID-19 contact tracing, COVID-19 vaccination appointments, social security numbers for job applicants, employee IDs, and millions of names and email addresses.”
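The misconfiguration UpGuard describes boils down to list data being readable anonymously over a portal's OData feed. A hedged sketch of such a check, with a placeholder portal host and entity name (only ever probe sites you own):

```python
import urllib.request

# Sketch of the anonymous-access check implied by UpGuard's finding: a
# misconfigured Power Apps portal serves list data over its OData feed
# without requiring authentication. Host and entity names below are
# placeholders for illustration.

def odata_url(portal_host: str, entity: str) -> str:
    """Build the feed URL for a given portal host and entity list."""
    return f"https://{portal_host}/_odata/{entity}"

def is_anonymously_readable(url: str, timeout: float = 5.0) -> bool:
    """True if the feed returns HTTP 200 with no credentials supplied."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

url = odata_url("example.powerappsportals.com", "contacts")
print(url)
```

A 200 response with a JSON body full of records, returned to a client that never authenticated, is the leak in a nutshell; the fix Microsoft shipped amounts to changing the defaults so table permissions are enforced.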
[…]
Following UpGuard’s disclosures, Microsoft has since shifted permissions and default settings related to Power Apps to make the product more secure.
After facing criticism over the app’s recent decision to prohibit sexually explicit content starting in October, OnlyFans CEO Tim Stokely pointed the finger at banks for the policy change.
In an interview with the Financial Times published Tuesday, Stokely singled out a handful of banks for “unfair” treatment, saying they made it “difficult to pay our creators.”
The Belarusian Cyber Partisans, as the hackers call themselves, have in recent weeks released portions of a huge data trove they say includes some of the country’s most secret police and government databases. The information contains lists of alleged police informants, personal information about top government officials and spies, video footage gathered from police drones and detention centers and secret recordings of phone calls from a government wiretapping system, according to interviews with the hackers and documents reviewed by Bloomberg News.
A screenshot of footage the hackers obtained from inside Belarusian detention centers where protesters were held and allegedly beaten.
Source: Belarusian Cyber Partisans
Among the pilfered documents are personal details about Lukashenko’s inner circle and intelligence officers, as well as mortality statistics suggesting that thousands more people in Belarus died from Covid-19 than the government has publicly acknowledged.
In an interview and on social media, the hackers said they also sabotaged more than 240 surveillance cameras in Belarus and are preparing to shut down government computers with malicious software named X-App.
[…]
the data exposed by the Cyber Partisans showed “that officials knew they were targeting innocent people and used extra force with no reason.” As a result, he said, “more people are starting to not believe in propaganda” from state media outlets, which suppressed images of police violence during anti-government demonstrations last year.
[…]
The hackers have teamed up with BYPOL, a group created by former Belarusian police officers who defected following the disputed election of Lukashenko last year. Mass demonstrations followed the election, and some police officers were accused of torturing and beating hundreds of citizens in a brutal crackdown.
[…]
The wiretapped phone recordings obtained by the hackers revealed that Belarus’s interior ministry was spying on a wide range of people, including police officers—both senior and rank-and-file—as well as officials working with the prosecutor general, according to Azarau. The recordings also offer audio evidence of police commanders ordering violence against protesters, he said.
[…]
Earlier this year, an affiliate of the group obtained physical access to a Belarus government facility and broke into the computer network while inside, the spokesman said. That laid the groundwork for the group to later gain further access, compromising some of the ministry’s most sensitive databases, he said. The stolen material includes the archive of secretly recorded phone conversations, which amounts to between 1 million and 2 million minutes of audio, according to the spokesman.
[…]
The hackers joined together in September 2020, after the disputed election. Their initial actions were small and symbolic, according to screenshots viewed by Bloomberg News. They hacked state news websites and inserted videos showing scenes of police brutality. They compromised a police “most wanted” list, adding the names of Lukashenko and his former interior minister, Yury Karayeu, to the list. And they defaced government websites with the red and white national flags favored by protesters over the official Belarusian red and green flag.
Those initial breaches attracted other hackers to the Cyber Partisans’ cause, and as it has grown, the group has become bolder with the scope of its intrusions. The spokesman said its aims are to protect the sovereignty and independence of Belarus and ultimately to remove Lukashenko from power.
[…]
Names and addresses of government officials and alleged informants obtained by the hackers have been shared with Belarusian websites, including Blackmap.org, that seek to “name and shame” people cooperating with the regime and its efforts to suppress peaceful protests, according to Viačorka and the websites themselves.
Samsung already makes it extremely difficult to have root access without tripping the security flags, and now the Korean OEM has introduced yet another roadblock for aftermarket development. In its latest move, Samsung disables the cameras on the Galaxy Z Fold 3 after you unlock the bootloader.
Knox is the security suite on Samsung devices, and any modifications to the device will trip it, void your warranty, and disable Samsung Pay permanently. Now, losing all the Knox-related security features is one thing, but having to deal with a broken camera is a trade-off that many will be unwilling to make. But that’s exactly what you’ll have to deal with if you wish to unlock the bootloader on the Galaxy Z Fold 3.
According to XDA Senior Members 白い熊 and ianmacd, the final confirmation screen during the bootloader unlock process on the Galaxy Z Fold 3 mentions that the operation will disable the camera. Upon booting with an unlocked bootloader, the stock camera app indeed fails to operate, and all camera-related functions cease to work, meaning you can’t use facial recognition either. Anything that uses any of the cameras, including third-party camera apps, will time out after a while and throw errors or simply remain dark.
Thanks to XDA Senior Member ianmacd for the images!
It is not clear why Samsung chose to go down the same road Sony walked in the past, but the real problem is that many users will probably overlook the warning and unlock the bootloader without knowing about this new restriction. Re-locking the bootloader does make the camera work again, which indicates it’s more of a software-level obstacle. With root access, it could be possible to detect and modify the responsible parameters the bootloader passes to the OS and so bypass the restriction. However, according to ianmacd, Magisk in its default state isn’t enough to circumvent the barrier.
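For tinkerers who want to check a device's state before or after unlocking, the lock status usually surfaces in Android boot properties. A hedged sketch using `adb shell getprop`; the property names (`ro.boot.flash.locked`, `ro.boot.warranty_bit`) are commonly reported on Samsung devices but should be treated as assumptions and verified against your own hardware:

```python
import re
import subprocess

# Hedged sketch: on many Samsung devices, ro.boot.flash.locked is "0"
# when the bootloader is unlocked, and ro.boot.warranty_bit is "1" once
# Knox has been tripped. Treat these names/meanings as assumptions.

def parse_getprop(output: str) -> dict:
    """Turn `adb shell getprop` lines like `[key]: [value]` into a dict."""
    return {k: v for k, v in re.findall(r"\[([^\]]+)\]: \[([^\]]*)\]", output)}

def bootloader_unlocked(props: dict) -> bool:
    return props.get("ro.boot.flash.locked") == "0"

if __name__ == "__main__":
    try:
        out = subprocess.run(["adb", "shell", "getprop"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        # adb not installed: fall back to a demo line so the sketch runs
        out = "[ro.boot.flash.locked]: [0]"
    props = parse_getprop(out)
    print("bootloader unlocked:", bootloader_unlocked(props))
```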
By combining miniaturized electronics with some origami-inspired fabrication, scientists in Germany have developed what they say is the smallest microsupercapacitor in existence. Smaller than a speck of dust but with a similar voltage to a AAA battery, the groundbreaking energy storage device is not only safe for use in the human body, but actually makes use of key ingredients in the blood to supercharge its performance.
[…]
These devices are known as biosupercapacitors, and the smallest one developed to date is larger than 3 mm³, but the scientists have made a huge leap forward in how tiny biosupercapacitors can be. The construction starts with a stack of polymeric layers sandwiched together with a light-sensitive photo-resist material that acts as the current collector, a separator membrane, and electrodes made from an electrically conductive biocompatible polymer called PEDOT:PSS.
This stack is placed on a wafer-thin surface that is subjected to high mechanical tension, which causes the various layers to detach in a highly controlled fashion and fold up origami-style into a nano-biosupercapacitor with a volume of 0.001 mm³, occupying less space than a grain of dust. These tubular biosupercapacitors are therefore 3,000 times smaller than those developed previously, yet deliver roughly the same voltage as a AAA battery (albeit with far lower actual current flow).
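The scale claim checks out against the two stated volumes:

```python
# Quick check of the "3,000 times smaller" figure: previous smallest
# biosupercapacitors were about 3 mm^3; the new tubular device is
# 0.001 mm^3.
previous_mm3 = 3.0
new_mm3 = 0.001
ratio = previous_mm3 / new_mm3
print(round(ratio))   # 3000
```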
These tiny devices were then placed in saline, blood plasma and blood, where they demonstrated an ability to successfully store energy. The biosupercapacitor proved particularly effective in blood, where it retained up to 70 percent of its capacity after 16 hours of operation. Another reason blood may be a suitable home for the team’s biosupercapacitor is that the device works with inherent redox enzymatic reactions and living cells in the solution to supercharge its own charge storage reactions, boosting its performance by 40 percent.
Prof. Dr. Oliver G. Schmidt has led the development of a novel, tiny supercapacitor that is biocompatible
Jacob Müller
The team also subjected the device to the forces it might experience in blood vessels where flow and pressure fluctuate, by placing them in microfluidic channels, kind of like wind-tunnel testing for aerodynamics, where it stood up well. They also used three of the devices chained together to successfully power a tiny pH sensor, which could be placed in the blood vessels to measure pH and detect abnormalities that could be indicative of disease, such as a tumor growth.
[…] The new “Personal Information Protection Law of the People’s Republic of China” comes into effect on November 1st, 2021, and comprises eight chapters and 74 articles.
[…]
The Cyberspace Administration of China (CAC) said, as translated from Mandarin using automated tools:
On the basis of relevant laws, the law further refines and perfects the principles and personal information processing rules to be followed in the protection of personal information, clarifies the boundaries of rights and obligations in personal information processing activities, and improves the work systems and mechanisms for personal information protection.
The document outlines standardized data-handling processes, defines rules on big data and large-scale operations, regulates those processing data, addresses data that flows across borders, and outlines legal enforcement of its provisions. It also clarifies that state agencies are not immune from these measures.
The CAC asserts that consenting to collection of data is at the core of China’s laws and the new legislation requires continual up-to-date fully informed advance consent of the individual. Parties gathering data cannot require excessive information nor refuse products or services if the individual disapproves. The individual whose data is collected can withdraw consent, and death doesn’t end the information collector’s responsibilities or the individual’s rights – it only passes down the right to control the data to the deceased subject’s family.
Information processors must also take “necessary measures to ensure the security of the personal information processed” and are required to set up compliance management systems and internal audits.
To collect sensitive data, like biometrics, religious beliefs, and medical, health and financial accounts, information needs to be necessary, for a specific purpose and protected. Prior to collection, there must be an impact assessment, and the individual should be informed of the collected data’s necessity and impact on personal rights.
Interestingly, the law seeks to prevent companies from using big data to prey on consumers – for example setting transaction prices – or mislead or defraud consumers based on individual characteristics or habits. Furthermore, large-scale network platforms must establish compliance systems, publicly self-report their efforts, and outsource data-protective measures.
And if data flows across borders, the data collectors must establish a specialized agency in China or appoint a representative to be responsible. Organizations are required to offer clarity on how data is protected and its security assessed.
Storing data overseas does not exempt a person or company from compliance with any part of the Personal Information Protection Law.
In the end, supervision and law enforcement falls to the Cyberspace Administration and relevant departments of the State Council.
It looks like China has taken a good look at the EU Cybersecurity Act and built on it. All of this sounds very good, and it is even better that Chinese governmental agencies are also mandated to follow it, but is it true? With all the governmental AI systems, cameras and facial recognition systems tracking ethnic minorities (such as the Uyghurs) and assigning behaviour scores, how will these be affected? Somehow I doubt they will dismantle the pervasive surveillance apparatus they have built. So even if the laws sound excellent, the proof is in the pudding.
When you plug in one of these Razer peripherals, Windows will automatically download Razer Synapse, the software that controls certain settings for your mouse or keyboard. Said Razer software has SYSTEM privileges, since it launches from a Windows process with SYSTEM privileges.
But that’s not where the vulnerability comes into play. During installation, Windows’ setup wizard asks which folder you’d like to save the software to. When you choose a new location, you’ll see a “Choose a Folder” prompt. Press Shift and right-click on it, and you can choose “Open PowerShell window here,” which opens a new PowerShell window.
Because this PowerShell window was launched from a process with SYSTEM privileges, the PowerShell window itself now has SYSTEM privileges. In effect, you’ve turned yourself into an admin on the machine, able to perform any command you can think of in the PowerShell window.
This vulnerability was first brought to light on Twitter by user jonhat, who tried contacting Razer about it first, to no avail. Razer did eventually follow up, confirming a patch is in the works. Until that patch is available, however, the company is inadvertently selling tools that make it easy to hack millions of computers.
A well-known threat actor with a long list of previous breaches is selling private data that was allegedly collected from 70 million AT&T customers. We analyzed the data and found it to include social security numbers, date of birth, and other private information. The hacker is asking $1 million for the entire database (direct sell) and has provided RestorePrivacy with exclusive information for this report.
Update: AT&T has initially denied the breach in a statement to RestorePrivacy. The hacker has responded by saying, “they will keep denying until I leak everything.”
Hot on the heels of a massive data breach at T-Mobile earlier this week, AT&T now appears to be in the spotlight. A well-known threat actor on the underground hacking scene claims to have private data on 70 million AT&T customers. The threat actor goes by the name ShinyHunters and was also behind previous exploits affecting Microsoft, Tokopedia, Pixlr, Mashable, Minted, and more.
The hacker posted the leak on an underground hacking forum earlier today, along with a sample of the data that we analyzed. The original post is below:
This is the original post offering the data for sale on a hacking forum.
We examined the data for this report and also reached out to the hacker who posted it for sale.
70 million AT&T customers could be at risk
In the original post that we discovered on a hacker forum, the user posted a relatively small sample of the data. We examined the sample and it appears to be authentic based on available public records. Additionally, the user who posted it has a history of major data breaches and exploits, as we’ll examine more below.
While we cannot yet confirm the data is from AT&T customers, everything we examined appears to be valid. Here is the data that is available in this leak:
Name
Phone number
Physical address
Email address
Social security number
Date of birth
Below is a screenshot from the sample of data available:
A selection of AT&T user data that is for sale.
In addition to the data above, the hacker has also accessed encrypted data from customers that includes social security numbers and dates of birth. Here is a sample that we examined:
The data is currently being offered for $1 million USD for a direct sell (or flash sell) and $200,000 for access that is given to others. Assuming it is legit, this would be a very valuable breach as other threat actors can likely purchase and use the information for exploiting AT&T customers for financial gain.
The problem with harvesting reams of sensitive data is that it presents a very tempting target for malicious hackers, enemy governments, and other wrongdoers. That hasn’t prevented anyone from collecting and storing all of this data, secure only in the knowledge this security will ultimately be breached.
The devices, known as HIIDE, for Handheld Interagency Identity Detection Equipment, were seized last week during the Taliban’s offensive, according to a Joint Special Operations Command official and three former U.S. military personnel, all of whom worried that sensitive data they contain could be used by the Taliban. HIIDE devices contain identifying biometric data such as iris scans and fingerprints, as well as biographical information, and are used to access large centralized databases. It’s unclear how much of the U.S. military’s biometric database on the Afghan population has been compromised.
At first, it might seem that this will only allow the Taliban to high-five each other for making the US government’s shit list. But it wasn’t just used to track terrorists. It was used to track allies.
While billed by the U.S. military as a means of tracking terrorists and other insurgents, biometric data on Afghans who assisted the U.S. was also widely collected and used in identification cards, sources said.
Epic Games’ objections to Google’s business practices became clearer on Thursday with the release of previously redacted accusations in the gaming giant’s lawsuit against the internet goliath.
Those accusations included details of a Google-run operation dubbed Project Hug that aimed to sling hundreds of millions of dollars at developers to get them to remain within Google Play; and a so-called Premiere Device Program that gave device makers extra cash if they ensured users could only get their apps from the Play store, locking out third-party marketplaces and incentivizing manufacturers not to create their own software souks.
[…]
As part of the litigation, Epic made some accusations under seal last month [PDF] because Google’s attorneys designated the allegations confidential, based on Google’s habit of keeping business arrangements secret.
But on Wednesday, Judge James Donato issued an order disagreeing with Google’s rationale and directing the redacted material to be made public.
“Google did not demonstrate how the unredacted complaints might cause it commercial harm, and permitting sealing on the basis of a party’s internal practices would leave the fox guarding the hen house,” the judge wrote [PDF].
The unredacted details, highlighted in a separate redlined filing [PDF] and incorporated into an amended complaint filed on Friday [PDF], suggest Google has gone to great lengths to discourage competing app stores and to keep developers from making waves.
For example, the documents explain how Google employs revenue-sharing and licensing agreements with Android partners (OEMs) to maintain Google Play as the dominant app store. One filing describes “Anti-Fragmentation Agreements” that prevent partners from modifying the Android operating system to offer app downloads in a way that competes with Google Play.
“Google’s documents show that it pushes OEMs into making Google Play the exclusive app store on the OEMs’ devices through a series of coercive carrots and sticks, including by offering significant financial incentives to those that do so, and withholding those benefits from those that do not,” the redlined complaint says.
These agreements allegedly included the Premiere Device Program, launched in 2019, to give OEMs financial incentives, such as 4 per cent or more of Google Search revenues and 3-6 per cent of Google Play spending on their devices, in return for ensuring Google exclusivity and blocking apps with APK install rights.
[…]
Google’s highest level execs, it’s claimed, suggested giving Epic Games a deal “worth up to $208m (incremental cost to Google of $147m) over three years” to keep the game maker compliant. And if Epic did not accept, the court filing alleges, “a senior Google executive proposed that Google ‘consider approaching Tencent,’ a company that owns a minority stake in Epic, ‘to either (a) buy Epic shares from Tencent to get more control over Epic,’ or ‘(b) join up with Tencent to buy 100 per cent of Epic.'”
The filing contends that in 2019 Google’s internal estimate was that the company could lose between $1.1bn and $6bn by 2022 if Android app stores operated by Amazon and Samsung gain traction. The Epic Games Store, it’s said, could have cost Google $350m during that period.
And this kind of nasty pressure is how monopolies strong-arm their way to dominance.
Court documents reveal that LG, Motorola, and HMD Global, which makes Nokia phones, are part of the Premier Device Program. Premier devices are effectively mandated to make Google’s services the “defaults for all key functions” for up to 90% of the manufacturer’s Android phones. This includes blocking apps with the ability to install APKs on the device, except for the app stores designed for and managed by the respective original equipment manufacturers (OEMs). In turn, Google promised a higher cut of search revenue earned on the device, raising the rate from 8% to 12%, which is not an insignificant increase. In some instances, Google also agreed to share up to 6% of the “Play spend” revenue from the Play Store, essentially how much money that phone made for Google based on the user’s interactions.
In addition to the other brands mentioned above, Xiaomi, Sony, Sharp, and BBK Electronics, which owns OnePlus, and overseas brands like Oppo and Vivo, were all involved in the program in varying capacities. Google even had contracts with carriers to dissuade them from launching app stores that would compete with Android’s app marketplace, a clear demonstration of how deep pockets can be used to shut out competition and innovation.
Distributed Denial of Secrets is a journalist 501(c)(3) non-profit devoted to enabling the free transmission of data in the public interest.
We aim to avoid political, corporate or personal leanings, to act as a beacon of available information. As a transparency collective, we don’t support any cause, idea or message beyond ensuring that information is available to those who need it most—the people.
Display items that come from the same category as the target product, such as a board game matched with other board games, enhance the chances of a target product’s purchase. In contrast, consumers are less likely to buy the target product if it is mismatched with products from different categories, for example, a board game displayed with kitchen knives.
The study utilized eye-tracking—a sensor technology that makes it possible to know where a person is looking—to examine how different types of displays influenced visual attention. Participants in the study looked at their target product for the same amount of time when it was paired with similar items or with items from different categories; however, shoppers spent more time looking at the mismatched products, even though they were only supposed to be there “for display.”
“What is surprising is that when I asked people how much they liked the target products, their preferences didn’t change between display settings,” Karmarkar said. “The findings show that it is not about how much you like or dislike the item you’re looking at, it’s about your process for buying the item. The surrounding display items don’t seem to change how much attention you give the target product, but they can influence your decision whether to buy it or not.”
Karmarkar, who holds Ph.D.s in consumer behavior and neuroscience, says the findings suggest that seeing similar options on the page reinforces the idea to consumers that they’re making the right kind of decision to purchase an item that fits the category on display.
Online researchers say they have found flaws in Apple’s new child abuse detection tool that could allow bad actors to target iOS users. However, Apple has denied these claims, arguing that it has intentionally built safeguards against such exploitation.
It’s just the latest bump in the road for the rollout of the company’s new features, which have been roundly criticized by privacy and civil liberties advocates since they were initially announced two weeks ago. Many critics view the updates—which are built to scour iPhones and other iOS products for signs of child sexual abuse material (CSAM)—as a slippery slope towards broader surveillance.
The most recent criticism centers around allegations that Apple’s “NeuralHash” technology—which scans for the bad images—can be exploited and tricked to potentially target users. This started because online researchers dug up and subsequently shared code for NeuralHash as a way to better understand it. One Github user, AsuharietYgvar, claims to have reverse-engineered the scanning tech’s algorithm and published the code to his page. Ygvar wrote in a Reddit post that the algorithm was basically available in iOS 14.3 as obfuscated code and that he had taken the code and rebuilt it in a Python script to assemble a clearer picture of how it worked.
Problematically, within a couple of hours, another researcher said they were able to use the posted code to trick the system into misidentifying an image, creating what is called a “hash collision.”
[…]
However, “hash collisions” involve a situation in which two totally different images produce the same “hash” or signature. In the context of Apple’s new tools, this has the potential to create a false-positive, potentially implicating an innocent person for having child porn, critics claim. The false-positive could be accidental or intentionally triggered by a malicious actor.
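A toy example can make the idea concrete. The sketch below uses a deliberately simplistic “average hash” (this is not Apple’s NeuralHash, whose real algorithm is a neural network and far more involved) just to show how two visibly different images can produce the identical signature:

```python
# Toy "perceptual hash": one bit per pixel, set when the pixel is
# brighter than the image's mean brightness. NOT Apple's NeuralHash --
# a minimal illustration of why distinct images can share a hash.

def average_hash(pixels):
    """Hash a flat list of grayscale pixel values into a bit string."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

# Two clearly different 2x2 "images"...
image_a = [10, 200, 10, 200]   # high-contrast stripes
image_b = [90, 110, 90, 110]   # low-contrast stripes

hash_a = average_hash(image_a)
hash_b = average_hash(image_b)
print(hash_a, hash_b, hash_a == hash_b)  # identical hashes: a collision
```

Any hash that maps a large input space to a small signature space must have collisions; the security question is how hard they are to find on purpose, which is exactly what the researchers probed.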
[…]
Most alarmingly, researchers noted that it could be easily co-opted by a government or other powerful entity, which might repurpose its surveillance tech to look for other kinds of content. “Our system could easily be repurposed for surveillance and censorship,” write Mayer and his research partner, Anunay Kulshrestha, in an op-ed in the Washington Post. “The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.”
The researchers were “so disturbed” by their findings that they subsequently declared the system dangerous, and warned that it shouldn’t be adopted by a company or organization until more research could be done to curtail the potential dangers it presented. However, not long afterward, Apple announced its plans to roll out a nearly identical system to over 1.5 billion devices in an effort to scan for CSAM. The op-ed ultimately notes that Apple is “gambling with security, privacy and free speech worldwide” by implementing a similar system in such a hasty, slapdash way.
[…]
Apple’s decision to launch such an invasive technology so swiftly and unthinkingly is a major liability for consumers. The fact that Apple says it has built safety nets around this feature is not comforting at all, he added.
“You can always build safety nets underneath a broken system,” said Green, noting that it doesn’t ultimately fix the problem. “I have a lot of issues with this [new system]. I don’t think it’s something that we should be jumping into—this idea that local files on your device will be scanned.” Green further affirmed the idea that Apple had rushed this experimental system into production, comparing it to an untested airplane whose engines are held together via duct tape. “It’s like Apple has decided we’re all going to go on this airplane and we’re going to fly. Don’t worry [they say], the airplane has parachutes,” he said.
In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions. At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system is facing a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: tech companies have greater access to soft information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.
The concept of using your web history to inform credit ratings is framed around the notion that lenders rely on hard data that might obscure the worthiness of a borrower or paint an unnecessarily dire picture during hard times. Citing soft data points like “the type of browser and hardware used to access the internet, the history of online searches and purchases” that could be incorporated into evaluating a borrower, the researchers believe that when a lender has a more intimate relationship with the potential client’s history, they might be more willing to cut them some slack. […] But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down. The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft data into credit analysis. And they do little to explain how this might work in practice.
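As a rough illustration of what such a model might look like, here is a minimal sketch; the feature names, weights, and logistic form are all invented for this example and are not taken from the IMF paper, whose authors do not specify a concrete model:

```python
import math

# Hypothetical "soft data" credit model. A real system would learn its
# weights from data (and raise the privacy concerns the researchers
# acknowledge); everything below is illustrative.

WEIGHTS = {
    "on_time_payment_rate": 3.0,      # traditional "hard" data
    "years_of_history": 0.4,
    "uses_desktop_browser": 0.3,      # "soft" behavioral signals
    "price_comparison_searches": 0.5,
}
BIAS = -2.0

def default_probability(features):
    """Logistic model: a higher weighted score means lower estimated risk."""
    score = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(score))  # sigmoid of the negated score

borrower = {
    "on_time_payment_rate": 0.9,
    "years_of_history": 2,
    "uses_desktop_browser": 1,
    "price_comparison_searches": 1,
}
print(f"estimated default risk: {default_probability(borrower):.1%}")
```

Even this toy makes the policy problem visible: the browser a person uses directly moves their risk score, and nothing in the model explains why.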
Photos that are sent in messaging apps like WhatsApp or Telegram aren’t scanned by Apple. Still, if you don’t want Apple to do this scanning at all, your only option is to disable iCloud Photos. To do that, open the “Settings” app on your iPhone or iPad, go to the “Photos” section, and disable the “iCloud Photos” feature. From the popup, choose the “Download Photos & Videos” option to download the photos from your iCloud Photos library.
Screenshot: Khamosh Pathak
You can also use the iCloud website to download all of your photos to your computer. Once iCloud Photos is disabled, your iPhone will stop uploading new photos to iCloud, and Apple won’t scan any of your photos.
Looking for an alternative? There really isn’t one. All major cloud-backup providers have the same scanning feature, it’s just that they do it completely in the cloud (while Apple uses a mix of on-device and cloud scanning). If you don’t want this kind of photo scanning, use local backups, NAS, or a backup service that is completely end-to-end encrypted.
The mysterious thief who stole $600m-plus in cryptocurrencies from Poly Network has been offered the role of Chief Security Advisor at the Chinese blockchain biz.
It’s been a rollercoaster ride lately for Poly Network. The outfit builds software that handles the exchange of crypto-currencies and other assets between various blockchains. Last week, it confirmed a miscreant had drained hundreds of millions of dollars in digital tokens from its platform by exploiting a security weakness in its design.
After Poly Network urged netizens, cryptoexchanges, and miners to reject transactions involving the thief’s wallet addresses, the crook started giving the digital money back – and at least $260m of tokens have been returned. The company said it has maintained communication with the miscreant, who is referred to as Mr White Hat.
“It is important to reiterate that Poly Network has no intention of holding Mr White Hat legally responsible, as we are confident that Mr White Hat will promptly return full control of the assets to Poly Network and its users,” the organization said.
“While there were certain misunderstandings in the beginning due to poor communication channels, we now understand Mr White Hat’s vision for Defi and the crypto world, which is in line with Poly Network’s ambitions from the very beginning — to provide interoperability for ledgers in Web 3.0.”
First, Poly Network offered him $500,000 in Ethereum as a bug bounty award. He said he wasn’t going to accept the money, though the reward was transferred to his wallet anyway. Now, the company has gone one step further and has offered him the position of Chief Security Advisor.
“We are counting on more experts like Mr White Hat to be involved in the future development of Poly Network since we believe that we share the vision to build a secure and robust distributed system,” it said in a statement. “Also, to extend our thanks and encourage Mr White Hat to continue contributing to security advancement in the blockchain world together with Poly Network, we cordially invite Mr White Hat to be the Chief Security Advisor of Poly Network.”
It’s unclear whether so-called Mr White Hat will accept the job offer or not. Judging by the messages embedded in Ethereum transactions exchanged between both parties, it doesn’t look likely at the moment. He still hasn’t returned $238m, to the best of our knowledge, and said he isn’t ready to hand over the keys to the wallet where the funds are stored. He previously claimed he had attacked Poly Network for fun and to highlight the vulnerability in its programming.
“Dear Poly, glad to see that you are moving things to the right direction! Your essays are very convincing while your actions are showing your distrust, what a funny game…I am not ready to publish the key in this week…,” according to one message he sent.
Zoom has agreed to pay $85 million to settle claims that it lied about offering end-to-end encryption and gave user data to Facebook and Google without the consent of users. The settlement between Zoom and the filers of a class-action lawsuit also covers security problems that led to rampant “Zoombombings.”
The proposed settlement would generally give Zoom users $15 or $25 each and was filed Saturday at US District Court for the Northern District of California. It came nine months after Zoom agreed to security improvements and a “prohibition on privacy and security misrepresentations” in a settlement with the Federal Trade Commission, but the FTC settlement didn’t include compensation for users.
As we wrote in November, the FTC said that Zoom claimed it offers end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers. In reality, “Zoom did not provide end-to-end encryption for any Zoom Meeting that was conducted outside of Zoom’s ‘Connecter’ product (which are hosted on a customer’s own servers), because Zoom’s servers—including some located in China—maintain the cryptographic keys that would allow Zoom to access the content of its customers’ Zoom Meetings,” the FTC said. In real end-to-end encryption, only the users themselves have access to the keys needed to decrypt content.
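The distinction is easy to demonstrate. The toy sketch below (using XOR as a stand-in cipher, which is not real cryptography) isolates the one thing that matters in the FTC’s definition: who holds the decryption key:

```python
from itertools import cycle

# Toy cipher: XOR the data with a repeating keystream. Chosen only to
# show *who can decrypt*, which is the substance of the FTC's complaint.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Server-mediated encryption: the provider holds the key, so it can
# read meeting content (the situation the FTC described at Zoom).
server_key = b"server-held-key"
ciphertext = xor_cipher(b"meeting audio", server_key)
assert xor_cipher(ciphertext, server_key) == b"meeting audio"

# End-to-end encryption: only the clients share the key; the server
# relays bytes it cannot decrypt.
client_key = b"client-only-key"
e2e_ciphertext = xor_cipher(b"meeting audio", client_key)
assert xor_cipher(e2e_ciphertext, server_key) != b"meeting audio"
print("only the key holder can recover the plaintext")
```

In both cases the traffic is “encrypted,” which is why the marketing claim was misleading rather than flatly false-sounding: the difference lies entirely in where the keys live.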
A vulnerability is lurking in numerous types of smart devices—including security cameras, DVRs, and even baby monitors—that could allow an attacker to access live video and audio streams over the internet and even take full control of the gadgets remotely. What’s worse, it’s not limited to a single manufacturer; it shows up in a software development kit that is built into more than 83 million devices and facilitates over a billion connections to the internet each month.
The SDK in question is ThroughTek Kalay, which provides a plug-and-play system for connecting smart devices with their corresponding mobile apps. The Kalay platform brokers the connection between a device and its app, handles authentication, and sends commands and data back and forth. For example, Kalay offers built-in functionality to coordinate between a security camera and an app that can remotely control the camera angle. Researchers from the security firm Mandiant discovered the critical bug at the end of 2020, and they are publicly disclosing it today in conjunction with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency.
“You build Kalay in, and it’s the glue and functionality that these smart devices need,” says Jake Valletta, a director at Mandiant. “An attacker could connect to a device at will, retrieve audio and video, and use the remote API to then do things like trigger a firmware update, change the panning angle of a camera, or reboot the device. And the user doesn’t know that anything is wrong.”
The flaw is in the registration mechanism between devices and their mobile applications. The researchers found that this most basic connection hinges on each device’s “UID,” a unique Kalay identifier. An attacker who learns a device’s UID—which Valletta says could be obtained through a social engineering attack, or by searching for web vulnerabilities of a given manufacturer—and who has some knowledge of the Kalay protocol can reregister the UID and essentially hijack the connection the next time someone attempts to legitimately access the target device. The user will experience a few seconds of lag, but then everything proceeds normally from their perspective.
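The flaw as described boils down to an unauthenticated, last-writer-wins registry. Here is a minimal sketch of how such a design enables hijacking; the class and endpoint names are invented, and the real Kalay protocol is considerably more complex:

```python
# Sketch of the design flaw Mandiant describes: a broker that accepts
# re-registration of a known UID with no proof of ownership, so whoever
# registered last receives the next inbound connection.

class TrustingBroker:
    def __init__(self):
        self.registry = {}  # UID -> network endpoint

    def register(self, uid, endpoint):
        # The flaw: no authentication, so the last writer always wins.
        self.registry[uid] = endpoint

    def connect(self, uid):
        # The victim's app asks the broker where its device lives.
        return self.registry[uid]

broker = TrustingBroker()
broker.register("CAM-1234-UID", "camera.local:9999")      # legitimate device
broker.register("CAM-1234-UID", "attacker.example:6666")  # attacker knows the UID

# The app now unknowingly talks to the attacker, who can harvest the
# credentials the app presents during login.
print(broker.connect("CAM-1234-UID"))  # attacker.example:6666
```

This is also why factory-resetting a device doesn’t help: the weakness is in the brokered rendezvous, not in any state stored on the gadget itself.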
The attacker, though, can grab special credentials—typically a random, unique username and password—that each manufacturer sets for its devices. With the UID plus this login the attacker can then control the device remotely through Kalay without any other hacking or manipulation. Attackers can also potentially use full control of an embedded device like an IP camera as a jumping-off point to burrow deeper into a target’s network.
By exploiting the flaw, an attacker could watch video feeds in real time, potentially viewing sensitive security footage or peeking inside a baby’s crib. They could launch a denial of service attack against cameras or other gadgets by shutting them down. Or they could install malicious firmware on target devices. Additionally, since the attack works by grabbing credentials and then using Kalay as intended to remotely manage embedded devices, victims wouldn’t be able to oust intruders by wiping or resetting their equipment. Hackers could simply relaunch the attack.
“The affected ThroughTek P2P products may be vulnerable to improper access controls,” CISA wrote in its Tuesday advisory. “This vulnerability can allow an attacker to access sensitive information (such as camera feeds) or perform remote code execution. … CISA recommends users take defensive measures to minimize the risk of exploitation of this vulnerability.”
[…]
To defend against exploitation, devices need to be running Kalay version 3.1.10, originally released by ThroughTek in late 2018, or higher. But even the current Kalay SDK version (3.1.5) does not automatically fix the vulnerability. Instead, ThroughTek and Mandiant say that to plug the hole manufacturers must turn on two optional Kalay features: the encrypted communication protocol DTLS and the API authentication mechanism AuthKey.
[…]
“For the past three years, we have been informing our customers to upgrade their SDK,” ThroughTek’s Chen says. “Some old devices lack OTA [over the air update] function which makes the upgrade impossible. In addition, we have customers who don’t want to enable the DTLS because it would slow down the connection establishment speed, therefore are hesitant to upgrade.”