Human speech may have a universal transmission rate: 39 bits per second

Italians are some of the fastest speakers on the planet, chattering at up to nine syllables per second. Many Germans, on the other hand, are slow enunciators, delivering five to six syllables in the same amount of time. Yet in any given minute, Italians and Germans convey roughly the same amount of information, according to a new study. Indeed, no matter how fast or slowly languages are spoken, they tend to transmit information at about the same rate: 39 bits per second, about twice the speed of Morse code.

“This is pretty solid stuff,” says Bart de Boer, an evolutionary linguist who studies speech production at the Free University of Brussels, but was not involved in the work. Language lovers have long suspected that information-heavy languages—those that pack more information about tense, gender, and speaker into smaller units, for example—move slowly to make up for their density of information, he says, whereas information-light languages such as Italian can gallop along at a much faster pace. But until now, no one had the data to prove it.

Scientists started with written texts from 17 languages, including English, Italian, Japanese, and Vietnamese. They calculated the information density of each language in bits—the same unit that describes how quickly your cellphone, laptop, or computer modem transmits information. They found that Japanese, which has only 643 syllables, had an information density of about 5 bits per syllable, whereas English, with its 6949 syllables, had a density of just over 7 bits per syllable. Vietnamese, with its complex system of six tones (each of which can further differentiate a syllable), topped the charts at 8 bits per syllable.

Next, the researchers spent 3 years recruiting and recording 10 speakers—five men and five women—from 14 of their 17 languages. (They used previous recordings for the other three languages.) Each participant read aloud 15 identical passages that had been translated into their mother tongue. After noting how long the speakers took to get through their readings, the researchers calculated an average speech rate per language, measured in syllables/second.

Some languages were clearly faster than others: no surprise there. But when the researchers took their final step—multiplying each language’s speech rate by its information density to find out how much information moved per second—they were shocked by the consistency of their results. No matter how fast or slow, how simple or complex, each language gravitated toward an average rate of 39.15 bits per second, they report today in Science Advances. In comparison, the world’s first computer modem (which came out in 1959) had a transfer rate of 110 bits per second, and the average home internet connection today has a transfer rate of 100 megabits (100 million bits) per second.
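
The arithmetic of that final step is easy to sketch. Here is a minimal Python illustration using the per-syllable densities quoted above; the speech rates are round numbers assumed for illustration, not the study’s measured values:

```python
# Information rate = speech rate (syllables/s) x density (bits/syllable).
# Densities are from the article; speech rates are illustrative assumptions.
languages = {
    #             assumed syl/s, bits/syllable
    "Japanese":   (7.8, 5.0),
    "English":    (5.5, 7.1),
    "Vietnamese": (4.9, 8.0),
}

for name, (syl_per_sec, bits_per_syl) in languages.items():
    rate = syl_per_sec * bits_per_syl  # bits per second
    print(f"{name:11s} ~{rate:4.1f} bits/s")
# Despite very different speech rates, each product lands near 39 bits/s.
```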

Source: Human speech may have a universal transmission rate: 39 bits per second | Science | AAAS

Hundreds of Millions of Facebook Users Phone Numbers Exposed

Facebook is staring down yet another security blunder, this time with an incident involving an exposed server containing hundreds of millions of phone numbers that were previously associated with accounts on its platform.

The situation appears to be pinned to a now-disabled platform feature that allowed users to search for someone based on their phone number. TechCrunch’s Zack Whittaker first reported Wednesday that a server—which did not belong to Facebook but was evidently not password protected and therefore accessible to anyone who could find it—was discovered online by security researcher Sanyam Jain and found to contain records on more than 419 million Facebook users, including 133 million records on users based in the U.S.

(A Facebook spokesperson disputed the 419 million figure in a call with Gizmodo, claiming the server contained “closer to half” of that number, but declined to provide a specific figure.)

According to TechCrunch, records contained on the server included a Facebook user’s phone number and individual Facebook ID. Using both, TechCrunch said it was able to cross-check them to verify records and additionally found that in some cases, records included a user’s country, name, and gender. The report stated that it’s unclear who scraped the data from Facebook or why. The Facebook spokesperson said that the company became aware of the situation a few days ago but would not specify an exact date.

Whittaker noted that having access to a user’s phone number could allow a bad actor to force-reset accounts linked to that number, and could further expose the user to intrusions like spam calls and other abuse. It could also let a bad actor pull up a host of private information on a person by feeding the number into any number of public databases, or, with some legwork or impersonation, grant a hacker access to apps or even a bank account.

Source: Hundreds of Millions of Facebook Users Phone Numbers Exposed

More Than Half the Nation’s State Attorneys General Could Sign on to Antitrust Inquiry Against Google

The Washington Post reported on Tuesday that “more than half of the nation’s state attorneys general” have signed on to and are preparing an antitrust investigation against digital titan Google, with the paper writing the inquiry is “scheduled to be announced next week, marking a major escalation in U.S. regulators’ efforts to probe Silicon Valley’s largest companies.”

Details of the investigation remain hazy, but the Post reported that the effort is “expected” to be bipartisan and could involve over 30 state attorneys general. The states’ investigation is, for now, separate from another antitrust review currently being conducted by the Department of Justice, and it comes as both Democrats on the campaign trail and the Trump administration have amped up the pressure on tech giants (albeit for entirely different reasons). The Post wrote:

A smaller group of these state officials, representing the broader coalition, is expected to unveil the investigation at a Monday news conference in Washington, according to three people familiar with the matter who spoke on the condition of anonymity because they were not authorized to discuss a law enforcement proceeding on the record, cautioning the plans could change.

It is unclear whether some or all of the attorneys general also plan to open or announce additional probes into other tech giants, including Amazon and Facebook, which have faced similar U.S. antitrust scrutiny. The states’ effort is expected to be bipartisan and could include more than 30 attorneys general, one of the people said.

While it’s “unclear” whether any DOJ officials will join the attorneys general during the expected announcement next week, the Post wrote, the agency’s antitrust chief Makan Delrahim did say in August that the DOJ was coordinating with state inquiries into possible violations of antitrust law by tech firms. The feds are currently carrying out multiple such antitrust investigations, including Federal Trade Commission probes of Facebook (separate from the paltry $5 billion fine it levied on the company earlier this year) and Amazon and a DOJ probe of Apple.

As the Post noted, the states have more limited powers at their disposal than the feds, which can break up entire firms under antitrust law. However, states can join with the feds in court, as they did during the antitrust case against Microsoft in the 1990s, as well as tangle Google up in years of legal battles. Former Maryland attorney general Doug Gansler told the paper, “If multiple states—and I mean not just Democratic attorneys general but Republican attorneys general as well—are all looking into potential antitrust violations, one of the biggest effects might be to pressure the federal government to do a deeper dive.”

Source: More Than Half the Nation’s State Attorneys General Could Sign on to Antitrust Inquiry Against Google

Do those retail apps increase customer engagement and sales in all channels? In the US: Yes.

Researchers from Texas A&M University published new research in the INFORMS journal Marketing Science showing that retailers’ branded mobile apps are very effective in increasing customer engagement and in lifting sales across channels—not just on the retailer’s website, but also in its stores. At the same time, apps increase the rate of returns, although the lift in sales outweighs the rise in returns.

The study, to be published in the September edition of the INFORMS journal Marketing Science, is titled “Mobile App Introduction and Online and Offline Purchases and Product Returns,” and is authored by Unnati Narang and Venkatesh Shankar, both of the Mays Business School at Texas A&M University.

The study authors found that retail app users buy 33 percent more frequently, they buy 34 percent more items, and they spend 37 percent more than non-app user customers over 18 months after app launch.

At the same time, app users return products 35 percent more frequently, and they return 35 percent more items, at a 41 percent increase in return value.

All factors considered, the researchers found that app users spend 36 percent more net of returns.

“Overall, we found that retail app users are significantly more engaged at every level of the retail experience, from making purchases to returning items,” said Narang. “Interestingly, we also found that app users tend to buy a more diverse set of items, including less popular products, than non-app users. This is particularly helpful for long-tail products, such as video games and music.”

“For the retailer, the lesson is that having a retail app will likely increase customer engagement and expand the range of products being sold online and in store,” added Shankar. “We also found that some app users who make a purchase within 48 hours of actually using an app tend to use it when they are physically close to the store of purchase. They are most likely to access the app for loyalty rewards, product details and notifications.”

Source: Do those retail apps increase customer engagement and sales in all channels?

Managers rated as highly emotionally intelligent are more ineffective and unpopular, research shows

Professor Nikos Bozionelos, of the EMLyon Business School, France, and Dr. Sumona Mukhuty, Manchester Metropolitan University, asked staff in the NHS to assess their managers’ emotional intelligence—defined as their level of empathy and their awareness of their own and others’ emotions.

The 309 managers were also assessed on the amount of effort they put into the job, the staff’s overall satisfaction with their manager, and how well they implemented change within the NHS system.

Professor Bozionelos told the British Academy of Management’s annual conference in Birmingham today [Wednesday 4 September 2019] that beyond a certain point managers rated as having high emotional intelligence were also scored as lower for most of the outcomes.

Those managers rated in the top 15 percent for emotional intelligence were evaluated lower than those rated in the 65th to 85th percentile range on the amount of effort they put into the job and on how satisfied their subordinates were with them.

The NHS was undergoing fundamental reorganization at the time of the study, and managers rated as most emotionally intelligent were scored as less effective at implementing this change, but highly for their continuing involvement in the process.

“Increases in emotional intelligence beyond a moderately high level are detrimental rather than beneficial in terms of leaders’ effectiveness,” said Professor Bozionelos.

“Managers who were rated beyond a particular threshold are considered less effective, and their staff are less satisfied with them.

“Too much emotional intelligence is associated with too much empathy, which in turn may make a manager hesitant to apply measures that he or she feels will impose excessive burden or discomfort to subordinates.”

The research contradicted the general assumption that the more emotional intelligence in a manager the better, he said, which had led to “an upsurge in investment in emotional intelligence training programs for leaders.”

“Beyond a particular level, emotional intelligence may not add anything to many aspects of a manager’s performance, and in fact may become detrimental. Simply considering that the more emotional intelligence the manager has, the better, may be an erroneous way of thinking.”

The researchers took into account a host of factors, such as leaders’ age and biological sex, in order to study the effects of emotional intelligence in isolation.

Source: Managers rated as highly emotionally intelligent are more ineffective and unpopular, research shows

SpaceX Says a ‘Bug’ Prevented It From Receiving Warning of Possible ESA Satellite Collision. For the first time, ESA had to unexpectedly avoid a satellite constellation.

The European Space Agency was forced to perform a “collision avoidance maneuver” to prevent its Aeolus spacecraft from potentially smashing into one of Elon Musk’s Starlink satellites, in what is quickly becoming an all-too-common occurrence. According to SpaceX, it never received the expected alert that a collision was possible.

ESA pumped out a series of tweets yesterday describing the incident, in which the Aeolus satellite “fired its thrusters, moving it off a collision course with a @SpaceX satellite in their #Starlink constellation” on Monday morning. Launched in August 2018, the Aeolus Earth science satellite monitors the planet’s wind from space, allowing for better weather predictions and climate modeling.

[…]

Experts in the ESA’s Space Debris Team “calculated the risk of collision between these two active satellites,” determining that the safest option for Aeolus was to increase its height and have it pass over the SpaceX satellite, according to an ESA tweet. It marked the first time the ESA had to perform “a collision avoidance manoeuvre” to protect one of its satellites from colliding with a “mega constellation,” noted the space agency.

[…]

But as the ESA tweeted yesterday, as “the number of satellites in orbit increases, due to ‘mega constellations’ such as #Starlink comprising hundreds or even thousands of satellites, today’s ‘manual’ collision avoidance process will become impossible…”

[…]

An ESA graphic identified the culprit as Starlink 44. The maneuver was performed half an Earth orbit before Aeolus’ closest approach to the Starlink satellite. Jeff Foust from SpaceNews provides more insight into the incident:

Holger Krag, director of ESA’s Space Safety Programme Office, said in a Sept. 3 email that the agency’s conjunction assessment team noticed the potential close approach about five days in advance, using data provided by the U.S. Air Force’s 18th Space Control Squadron. “We have informed SpaceX and they acknowledged,” he said. “Over the days the collision probability exceeded the decision threshold and we started the maneuver preparation and shared our plans with SpaceX. The decision to maneuver was then made the day before.”

The odds of a collision were calculated at 1 in 1,000, which was high enough to warrant the maneuver. ESA scientists assessed the threat using data gathered by the U.S. Air Force, along with the “operators’ own knowledge of spacecraft positions,” according to SpaceNews.

In a statement emailed to Gizmodo, a SpaceX spokesperson said the Starlink team “last exchanged an email with the Aeolus operations team on August 28, when the probability of collision was only in the [1 in 50,000 range], well below the [1 in 10,000] industry standard threshold and 75 times lower than the final estimate.”

Once the U.S. Air Force’s updates showed that the probability had increased to more than 1 in 10,000, “a bug in our on-call paging system prevented the Starlink operator from seeing the follow on correspondence on this probability increase,” according to the spokesperson, who said “SpaceX is still investigating the issue and will implement corrective actions…. had the Starlink operator seen the correspondence, we would have coordinated with ESA to determine best approach with their continuing with their maneuver or our performing a maneuver.”
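
The decision logic described in these exchanges reduces to a simple threshold test. A hypothetical Python sketch (the function and its structure are invented for illustration; only the probability figures come from the reporting, and SpaceX’s actual paging system is not public):

```python
# Page the on-call operator once an updated conjunction estimate crosses
# the 1-in-10,000 industry threshold mentioned above. Purely illustrative.
INDUSTRY_THRESHOLD = 1 / 10_000

def should_page_operator(collision_probability: float) -> bool:
    """True when a new collision-probability estimate warrants a page."""
    return collision_probability >= INDUSTRY_THRESHOLD

print(should_page_operator(1 / 50_000))  # False: the August 28 estimate
print(should_page_operator(1 / 1_000))   # True: the final estimate
```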

Yikes. This incident reveals the flimsy and primitive state of space traffic management, in which a failed communication led to ESA having to act unilaterally on the issue.

Source: SpaceX Says a ‘Bug’ Prevented It From Receiving Warning of Possible Satellite Collision

Well done, Elon Musk, incompetence does it again.

Mozilla says Firefox won’t defang ad blockers – unlike Google Chrome, which is steadily stripping away your privacy protections against 3rd parties

On Tuesday, Mozilla said it is not planning to change the ad-and-content blocking capabilities of Firefox to match what Google is doing in Chrome.

Google’s plan to revise its browser extension APIs, known as Manifest v3, follows from the web giant’s recognition that many of its products and services can be abused by unscrupulous developers. The search king refers to its product security and privacy audit as Project Strobe, “a root-and-branch review of third-party developer access to your Google account and Android device data.”

In a Chrome extension, the manifest file (manifest.json) tells the browser which files and capabilities (APIs) will be used. Manifest v3, proposed last year and still being hammered out, will alter and limit the capabilities available to extensions.

Developers who created extensions under Manifest v2 may have to revise their code to keep it working with future versions of Chrome. That may not be practical or possible in all cases, though. The developer of uBlock Origin, Raymond Hill, has said his web-ad-and-content-blocking extension will break under Manifest v3. It’s not yet clear whether uBlock Origin can or will be adapted to the revised API.

The most significant change under Manifest v3 is the deprecation of the blocking webRequest API (except for enterprise users), which lets extensions intercept incoming and outgoing browser data, so that the traffic can be modified, redirected or blocked.

Firefox not following

“In its place, Google has proposed an API called declarativeNetRequest,” explains Caitlin Neiman, community manager for Mozilla Add-ons (extensions), in a blog post.

“This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves.”
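
For a concrete sense of the difference: under declarativeNetRequest, an extension ships a static list of match-and-block rules that the browser itself evaluates, rather than inspecting each request in code. Below is a sketch of roughly what one such rule looks like, built in Python and serialized to the JSON an extension would declare; the filter pattern is a placeholder.

```python
import json

# Roughly the shape of a declarativeNetRequest rule: the extension declares
# the condition and action up front; the browser applies it. Contrast with
# blocking webRequest, where extension code sees and vets every request.
rule = {
    "id": 1,
    "priority": 1,
    "action": {"type": "block"},
    "condition": {
        "urlFilter": "||ads.example.com^",  # placeholder pattern
        "resourceTypes": ["script", "image"],
    },
}

print(json.dumps([rule], indent=2))  # what the extension's rule list holds
```

The limits Neiman refers to are caps on how many such rules an extension may register and which filters and actions they may use, which is what bites sophisticated content blockers.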

Mozilla offers Firefox developers the Web Extensions API, which is mostly compatible with the Chrome extensions platform and is supported by Chromium-based browsers Brave, Opera and Vivaldi. Those other three browser makers have said they intend to work around Google’s changes to the blocking webRequest API. Now, Mozilla says as much.

“We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them,” said Neiman.

[…]

Google maintains, “We are not preventing the development of ad blockers or stopping users from blocking ads,” even as it acknowledges “these changes will require developers to update the way in which their extensions operate.”

Yet Google’s related web technology proposal two weeks ago to build a “privacy sandbox,” through a series of new technical specifications that would hinder anti-tracking mechanisms, has been dismissed as disingenuous “privacy gaslighting.”

On Friday, EFF staff technologist Bennett Cyphers lambasted the ad biz for its self-serving specs. “Google not only doubled down on its commitment to targeted advertising, but also made the laughable claim that blocking third-party cookies – by far the most common tracking technology on the Web, and Google’s tracking method of choice – will hurt user privacy,” he wrote in a blog post.

Source: Mozilla says Firefox won’t defang ad blockers – unlike a certain ad-giant browser • The Register

REVEALED: Hundreds of words to avoid using online if you don’t want the government spying on you

The Department of Homeland Security has been forced to release a list of keywords and phrases it uses to monitor social networking sites and online media for signs of terrorist or other threats against the U.S.

The intriguing list includes obvious choices such as ‘attack’, ‘Al Qaeda’, ‘terrorism’ and ‘dirty bomb’ alongside dozens of seemingly innocent words like ‘pork’, ‘cloud’, ‘team’ and ‘Mexico’.

Released under a freedom of information request, the information sheds new light on how government analysts are instructed to patrol the internet searching for domestic and external threats.

The words are included in the department’s 2011 ‘Analyst’s Desktop Binder’ used by workers at its National Operations Center, which instructs workers to identify ‘media reports that reflect adversely on DHS and response activities’.

Department chiefs were forced to release the manual following a House hearing over documents obtained through a Freedom of Information Act lawsuit which revealed how analysts monitor social networks and media organisations for comments that ‘reflect adversely’ on the government.

However they insisted the practice was aimed not at policing the internet for disparaging remarks about the government and signs of general dissent, but to provide awareness of any potential threats.

As well as terrorism, analysts are instructed to search for evidence of unfolding natural disasters, public health threats and serious crimes such as mall/school shootings, major drug busts and illegal immigrant busts.

The list has been posted online by the Electronic Privacy Information Center – a privacy watchdog group that filed a request under the Freedom of Information Act before suing to obtain the release of the documents.

In a letter to the House Homeland Security Subcommittee on Counter-terrorism and Intelligence, the centre described the choice of words as ‘broad, vague and ambiguous’.

They point out that it includes ‘vast amounts of First Amendment protected speech that is entirely unrelated to the Department of Homeland Security mission to protect the public against terrorism and disasters.’

A senior Homeland Security official told the Huffington Post that the manual ‘is a starting point, not the endgame’ in maintaining situational awareness of natural and man-made threats and denied that the government was monitoring signs of dissent.

However the agency admitted that the language used was vague and in need of updating.

Spokesman Matthew Chandler told the website: ‘To ensure clarity, as part of … routine compliance review, DHS will review the language contained in all materials to clearly and accurately convey the parameters and intention of the program.’

MIND YOUR LANGUAGE: THE LIST OF KEYWORDS IN FULL

[The full keyword lists were published as images in the original article.]

Source: REVEALED: Hundreds of words to avoid using online if you don’t want the government spying on you | Daily Mail Online

Basically you’re being censored through the use of unnecessary, ubiquitous surveillance – by a democracy.

Scammer Successfully Deepfaked CEO’s Voice To Fool Underling Into Transferring $243,000

The CEO of an energy firm based in the UK thought he was following his boss’s urgent orders in March when he transferred funds to a third-party. But the request actually came from the AI-assisted voice of a fraudster.

The Wall Street Journal reports that the mark believed he was speaking to the CEO of his business’s parent company, based in Germany. The German-accented caller told him to send €220,000 ($243,000) to a Hungarian supplier within the hour. The firm’s insurance company, Euler Hermes Group SA, shared information about the crime with WSJ but would not reveal the name of the targeted businesses.

Euler Hermes fraud expert Rüdiger Kirsch told WSJ that the victim recognized his superior’s voice because it had a hint of a German accent and the same “melody.” This was reportedly the first time Euler Hermes had dealt with clients affected by crimes that used AI mimicry.

Source: Scammer Successfully Deepfaked CEO’s Voice To Fool Underling Into Transferring $243,000

A way to repair tooth enamel

A team of researchers from Zhejiang University and Xiamen University has found a way to repair human tooth enamel. In their paper published in the journal Science Advances, the group describes their process and how well it worked when tested.

[…]

The researchers first created extremely tiny (1.5-nanometer diameter) clusters of calcium phosphate, the main ingredient of natural enamel. Each of the tiny clusters was then prepared with triethylamine—doing so prevented the clusters from clumping together. The clusters were then mixed with a gel that was applied to a sample of crystalline hydroxyapatite—a material very similar to human enamel. Testing showed that the clusters fused with the stand-in, and in doing so created a layer that covered the sample. They further report that the layer was much more tightly arranged than prior teams had achieved with similar work. They claim that such tightness allowed the new material to fuse with the old as a single layer, rather than multiple crystalline areas.

The team then carried out the same type of testing using real human teeth that had been treated with acid to remove the enamel. They report that within 48 hours of application, crystalline layers of approximately 2.7 micrometers had formed on the teeth. Close examination with a microscope showed that the layer had a fish-scale-like structure very similar to that of natural enamel. Physical testing showed the new enamel to be nearly identical to natural enamel in strength and wear resistance.

The researchers note that more work is required before their technique can be used by dentists—primarily to make sure that it does not have any undesirable side effects.

Source: A way to repair tooth enamel

ESA satellite dodges a “mega constellation” – Musk’s Starlink cluster

The European Space Agency (ESA) accomplished a first today: moving one of its satellites away from a potential collision with a “mega constellation”.

The constellation in question was SpaceX’s Starlink, and the firing of the thrusters of the Aeolus Earth observation satellite was designed to raise the orbit of the spacecraft to allow SpaceX’s satellite to pass beneath without risking a space slam.

The ESA operations team confirmed that this morning’s manoeuvre took place approximately half an orbit before the potential pileup. It also warned that, with further Starlink satellites in the pipeline and other constellations from the likes of Amazon due to launch, performing such moves manually would soon become impossible.

If plans to orbit thousands more satellites (to bring broadband to remote areas, or inflict it on air-travellers, for example) come to fruition, the ESA team reckons that things will need to be a lot more automated. Acronyms such as AI have been bandied around to create debris and constellation avoidance systems that move faster than the current human-based approach.

We contacted SpaceX to get its take on ESA’s antics, but nothing has yet emerged from Musk’s media orifice. If it does, we will update this article accordingly.

While this is a first for a “mega constellation”, ESA is well practiced at dodging satellites, although mostly dead ones (or debris). In 2018, the boffins keeping track of things had to perform 28 manoeuvres. A swerve to miss an active spacecraft is, however, unusual.

Aeolus itself was launched on 22 August 2018, and is designed to acquire profiles of the Earth’s winds, handy for understanding the dynamics of weather and improving forecasting.

You can make your own joke about nervous squeaks of flatulence as scientists realised that the spacecraft, designed to spend just over three years in orbit, was headed toward a possible mash-up with one of Musk’s finest.

The incident serves as a timely reminder of the risks of flinging up thousands of small satellites to blanket the Earth with all manner of services. Keeping the things out of the way of each other and those spacecraft with more scientific goals will be an ever increasing challenge if the plans of Musk et al become a reality.

Source: Everyone remembers their first time: ESA satellite dodges a “mega constellation” • The Register

Up to 2% of all Apple iPhones hacked, says Google – and the implant breaks ALL messaging encryption as well as sending location data

The potential impact of the latest attack on iPhones is massive, not to mention hugely concerning for every user of Apple’s famous smartphone.

That simply visiting a website can lead to your iPhone being hacked silently by some unknown party is worrying enough. But given that, according to Google researchers, it’s possible for the hackers to access encrypted messages on WhatsApp, iMessage, Telegram and others, the attacks undermine the security promised by those apps. It’s a stark reminder that should Apple’s iOS be compromised by hidden malware, encryption can be entirely undone. Own the operating system, own everything inside.

Among the trove of data released by Google researcher Ian Beer on the attacks was detail on the “monitoring implant” hackers installed on the iPhone. He noted that it had access to all the database files on the victim’s phone used by those end-to-end encrypted apps. Those databases “contain the unencrypted, plain-text of the messages sent and received using the apps.”

The implant would also enable hackers to snoop on Gmail and Google Hangouts, contacts and photos. The hackers could also watch where users were going with a live GPS location tracker. And the malware stole the “keychain” where passwords, such as those for all remembered Wi-Fi points, are stored.

Shockingly, according to Beer, the hackers didn’t even bother encrypting the data they were stealing, making a further mockery of encrypted apps. “Everything is in the clear. If you’re connected to an unencrypted Wi-Fi network, this information is being broadcast to everyone around you, to your network operator and any intermediate network hops to the command and control server,” the Google researcher wrote. “This means that not only is the end-point of the end-to-end encryption offered by messaging apps compromised; the attackers then send all the contents of the end-to-end encrypted messages in plain text over the network to their server.”

Beer’s ultimate assessment is sobering: “The implant has access to almost all of the personal information available on the device, which it is able to upload, unencrypted, to the attacker’s server.”

And, Beer added, even once the iPhone has been cleaned of infection (which would happen on a device restart or with the patch applied), the information the hackers pilfered could be used to maintain access to people’s accounts. “Given the breadth of information stolen, the attackers may nevertheless be able to maintain persistent access to various accounts and services by using the stolen authentication tokens from the keychain, even after they lose access to the device.”

iPhone users should upgrade to the latest iOS as soon as they can to get a patch for the flaw, which was fixed earlier this year. Apple did not comment.

[…]

Avraham said he’d analyzed many cases of attacks on iPhones and iPads. He said he wouldn’t be surprised if the number of remotely infected iOS devices was anywhere between 0.1% and 2% of all 1 billion iPhones in use. That’d be anywhere from 1 million to 20 million devices.

“The only way to fight back is to patch vulnerabilities used as part of exploit chains while strategic mitigations are developed. This cannot be done effectively solely by Apple without the help of the security community,” Avraham added.

“Unfortunately the security community cannot help much due to Apple’s own restrictions. The current sandbox policies do not allow security analysts to extract malware from the device even if the device is compromised.”

Source: Apple iPhone Hack Exposed By Google Breaks WhatsApp Encryption

Some of The World’s Most-Cited Scientists Have Been Citing Themselves Through Citation Farms

A new study has revealed an unsettling truth about the citation metrics that are commonly used to gauge scientists’ level of impact and influence in their respective fields of research.

Citation metrics indicate how often a scientist’s research output is formally referenced by colleagues in the footnotes of their own papers – but a comprehensive analysis of this web of linkage shows the system is compromised by a hidden pattern of behaviour that often goes unnoticed.

Specifically, among the 100,000 most cited scientists between 1996 and 2017, there’s a stealthy pocket of researchers who represent “extreme self-citations and ‘citation farms’ (relatively small clusters of authors massively citing each other’s papers),” explain the authors of the new study, led by physician turned meta-researcher John Ioannidis from Stanford University.

[…]

Among the 100,000 most highly cited scientists for the period of 1996 to 2017, over 1,000 researchers self-cited more than 40 percent of their total citations – and over 8,500 researchers had greater than 25 percent self-citations.
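
The metric at issue is simple to compute, which is partly why it is so easy to inflate. A toy Python sketch with invented numbers, using the thresholds reported above:

```python
# Self-citation rate = citations from an author's own papers / total
# citations received. All figures below are made up for illustration.
def self_citation_rate(total_citations: int, self_citations: int) -> float:
    return self_citations / total_citations

authors = {"A": (1200, 520), "B": (900, 230), "C": (3000, 150)}
for name, (total, self_c) in authors.items():
    rate = self_citation_rate(total, self_c)
    flag = ("extreme (>40%)" if rate > 0.40
            else "high (>25%)" if rate > 0.25
            else "unremarkable")
    print(f"Author {name}: {rate:.0%} self-citations -> {flag}")
```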

There’s no suggestion that any of these self-citations are necessarily or automatically unethical or unwarranted or self-serving in themselves. After all, in some cases, your own published scientific research may be the best and most relevant source to link to.

But the researchers behind the study nonetheless suggest that the prevalence of extreme cases revealed in their analysis debases the value of citation metrics as a whole – which are often used as a proxy for a scientist’s standing and output quality (not to mention employability).

“With very high proportions of self-citations, we would advise against using any citation metrics since extreme rates of self-citation may herald also other spurious features,” the authors write.

“These need to be examined on a case-by-case basis for each author, and simply removing the self-citations may not suffice.”

[…]

“When we link professional advancement and pay attention too strongly to citation-based metrics, we incentivise self-citation,” psychologist Sanjay Srivastava from the University of Oregon, who wasn’t involved in the study, told Nature.

“Ultimately, the solution needs to be to realign professional evaluation with expert peer judgement, not to double down on metrics.”

The findings are reported in PLOS Biology.

Source: Some of The World’s Most-Cited Scientists Have a Secret That’s Just Been Exposed

Don’t fly with your Explody MacBook!

Following an Apple notice that a “limited number” of 15-inch MacBook Pros may have faulty batteries that could potentially create a fire safety risk, multiple airlines have barred transporting Apple laptops in their checked luggage—in some cases, regardless of whether they fall under the recall.

Bloomberg reported Wednesday that Qantas Airways and Virgin Australia had joined the growing list of airlines enforcing policies around the MacBook Pros. In a statement by email, a spokesperson for Qantas told Gizmodo that “[u]ntil further notice, all 15 inch Apple MacBook Pros must be carried in cabin baggage and switched off for flight following a recall notice issued by Apple.”

Virgin Australia, meanwhile, said in a “Dangerous Goods” notice on its website that any MacBook model “must be placed in carry-on baggage only. No Apple MacBooks are permitted in checked in baggage until further notice.”

Apple in June announced a voluntary recall program for the affected models of 15-inch Retina display MacBook Pro, which it said were sold between September 2015 and February 2017. Apple said at the time it would fix affected models for free, adding that “[c]ustomer safety is always Apple’s top priority.”

Apple did not immediately return a request for comment about airline policies implemented in response to the recall.

Both Singapore Airlines and Thai Airways also recently instituted policies around the MacBook Pros. In a statement on its website over the weekend, Singapore Airlines said that passengers are prohibited from bringing affected models on its aircraft either in their carry-ons or in their checked luggage “until the battery has been verified as safe or replaced by the manufacturer.”

Bloomberg previously reported that airlines TUI Group Airlines, Thomas Cook Airlines, Air Italy, and Air Transat also introduced bans on the laptops. The cargo activity of all four is managed by Total Cargo Expertise, which reportedly said in an internal notice to its staff that the affected devices are “prohibited on board any of our mandate carriers.”

Both the Federal Aviation Administration and European Union Aviation Safety Agency said they had contacted airlines following Apple’s announcement regarding the recall. The FAA said that it alerted U.S. carriers to the issue in July.

Apple allows MacBook users to see if their devices are affected by inputting a serial number. While checking individual serial numbers for each and every device that comes through security checkpoints has the potential to slow service, banning all MacBooks either outright or in the cabin seems like a severe overreaction and, to be honest, a gigantic pain in the ass for customers.

Source: Airlines Are Banning MacBooks From Checked Luggage

I’d say removing MacBooks from checked-in luggage and then checking whether the serials are OK or not would take a stupid amount of time. Banning them from checked-in luggage makes perfect sense.

MIT Researchers Build Functional Carbon Nanotube Microprocessor

Scientists at MIT built a 16-bit microprocessor out of carbon nanotubes and even ran a program on it, a new paper reports.

Silicon-based computer processors seem to be approaching a limit to how small they can be scaled, so researchers are looking for other materials that might make for useful processors. It appears that transistors made from tubes of rolled-up, single-atom-thick sheets of carbon, called carbon nanotubes, could one day have more computational power while requiring less energy than silicon.

[…]

the MIT group, led by Gage Hills and Christian Lau, has now debuted a functional 16-bit processor called RV16X-NANO that uses carbon nanotubes, rather than silicon, for its transistors. The processor was constructed using the same industry-standard processes behind silicon chips—senior author Max Shulaker explained that it’s basically just a silicon microprocessor with carbon nanotubes instead of silicon.

The processor works well enough to run HELLO WORLD, a program that simply outputs the phrase “HELLO WORLD” and is the first program that most coding students learn. Shulaker compared its performance to a processor you’d buy at a hobby shop to control a small robot.

[…]

A small but notable fraction of carbon nanotubes act like conductors instead of semiconductors. Shulaker explained that study author Hills devised a technique called DREAM, where the circuits were specifically designed to work despite the presence of metallic nanotubes. And of course, the effort relied on the contribution of every member of the relatively small team. The researchers published their results in the journal Nature today.

[…]

Ultimately, the goal isn’t to erase the decades of progress made by silicon microchips—perhaps companies can integrate carbon nanotube pieces into existing architectures.

This is still a proof-of-concept. The team still hasn’t calculated the chip’s performance or whether it’s actually more energy efficient than silicon—the gains are based on projections. But Shulaker hopes that the team’s work will serve as a roadmap toward incorporating carbon nanotubes in computers for the future.

Source: MIT Researchers Build Functional Carbon Nanotube Microprocessor

MIT Researchers Design Robotic Thread that navigates Human Brains to clear clots

Robotics engineers at MIT have built a threadlike robot worm that can be magnetically steered to deftly navigate the extremely narrow and winding arterial pathways of the human brain. One day it could be used to quickly clear blockages and clots that contribute to strokes and aneurysms.

[…]

Strokes are a leading cause of death and disability in the United States, but relieving blood vessel blockages within the first 90 minutes of treatment has been found to dramatically increase survival rates of patients. The process is a complicated one, however, requiring skilled surgeons to manually guide a thin wire through a patient’s arteries up into a damaged brain vessel followed by a catheter that can deliver treatments or simply retrieve a clot. Not only is there the potential for these wires to damage vessel linings as they inch through the body, but during the process, surgeons are exposed to excess radiation from a fluoroscope which guides them by generating x-ray images in real-time. There’s a lot of room for improvement.

Using their expertise in both water-based biocompatible hydrogels and the use of magnets to manipulate simple machines, the MIT engineers created a robotic worm featuring a pliable nickel-titanium alloy core with shape-memory characteristics, so that when bent it returns to its original shape. The core was then coated in a rubbery paste embedded with magnetic particles, which was then wrapped in an outer coating of hydrogel, allowing the robotic worm to glide through arteries and blood vessels without any friction that could potentially cause damage.

The robot was tested on a small obstacle course featuring a twisting path of small rings guided by a strong magnet that could be operated at enough distance to be placed outside a patient. The engineers also mocked up a life-size replica of a brain’s blood vessels and found that not only could the robot easily navigate that obstacle but that there was also the potential to upgrade it with additional tools like a delivery mechanism for clot reducing drugs. They even successfully replaced the worm’s metal core with an optical cable, so that once it reached its destination, it could deliver powerful laser pulses to help remove a blockage.

The robot would not only make the post-stroke procedure faster and safer, but it would also reduce the exposure to radiation that surgeons often have to endure. And while it was tested using a manually operated magnet to steer it, eventually machines could be built to control the position of the magnet (MRI machines already surround patients in intense magnetic fields) with improved accuracy, which would in turn further improve and accelerate the robot’s journey through a patient’s body.

Source: MIT Researchers Designed this Robotic Worm to Burrow Into Human Brains

A bit unsure why the original article is so down on the concept and wants to frame it negatively, but oh well.

Irish Teen Wins 2019 Google Science Fair For Removing Microplastics From Water

An Irish teenager just won $50,000 for his project focusing on extracting microplastics from water.

Google launched the Google Science Fair in 2011; students ages 13 through 18 can submit experiments and their results to a panel of judges. The winner receives $50,000. The competition is also sponsored by Lego, Virgin Galactic, National Geographic and Scientific American.

Fionn Ferreira, an 18-year-old from West Cork, Ireland, won the competition for his methodology to remove microplastics from water.

Microplastics are defined as having a diameter of 5 mm or less and are too small to be filtered or screened out during wastewater treatment. Microplastics are often included in soaps, shower gels, and facial scrubs for their ability to exfoliate the skin. Microplastics can also come off clothing during normal washing.

These microplastics then make their way into waterways and are virtually impossible to remove through filtration. Small fish are known to eat microplastics and as larger fish eat smaller fish these microplastics are concentrated into larger fish species that humans consume.

Ferreira used a combination of oil and magnetite powder to create a ferrofluid in the water containing microplastics. The microplastics combined with the ferrofluid which was then extracted.

After the microplastics bound to the ferrofluid, Ferreira used a magnet to remove the solution and leave only water.

After 1,000 tests, the method was 87% effective in removing microplastics of all sorts from water. The most effective microplastic removed was that from a washing machine with the hardest to remove being polypropylene plastics.

With the confirmation of the methodology, Ferreira hopes to scale the technology to be able to implement at wastewater treatment facilities.

This would prevent the microplastics from ever reaching waterways and the ocean. While reduction in the use of microplastics is the ideal scenario, this methodology presents a new opportunity to screen for microplastics before they are consumed as food by fish.

At 18, Ferreira has an impressive array of accomplishments. He is the curator at the Schull Planetarium, speaks three languages fluently, has won 12 previous science fair competitions, plays the trumpet in an orchestra and has a minor planet named after him by MIT.

Source: Irish Teen Wins 2019 Google Science Fair For Removing Microplastics From Water

Electric Dump Truck Produces More Energy Than It Uses

Electric vehicles are everywhere now. It’s more than just Leafs, Teslas, and a wide variety of electric bikes. It’s also trains, buses, and in this case, gigantic dump trucks. This truck in particular is being put to work at a mine in Switzerland, and as a consequence of having an electric drivetrain is actually able to produce more power than it consumes. (Google Translate from Portuguese)

This isn’t some impossible perpetual motion machine, either. The dump truck drives up a mountain with no load, and carries double the weight back down the mountain after getting loaded up with lime and marl to deliver to a cement plant. Since electric vehicles can recover energy through regenerative braking, rather than wasting that energy as heat in a traditional braking system, the extra weight on the way down actually delivers more energy to the batteries than the truck used on the way up the mountain.
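
A back-of-envelope energy balance makes the claim plausible. In the sketch below, only the 110-ton figure comes from the article (taken here as the loaded weight); the empty weight, elevation drop and efficiencies are assumptions for illustration:

```python
# Potential-energy bookkeeping for one round trip, with assumed numbers.
g = 9.81            # m/s^2
drop = 600.0        # m of elevation between quarry and plant (assumed)
m_empty = 45_000    # kg, unloaded truck (assumed)
m_loaded = 110_000  # kg, loaded weight (the article's 110 t figure)
eta_drive = 0.85    # battery-to-wheel efficiency (assumed)
eta_regen = 0.70    # wheel-to-battery recovery via regen braking (assumed)

e_up = m_empty * g * drop / eta_drive     # drawn from battery climbing empty
e_down = m_loaded * g * drop * eta_regen  # returned to battery descending loaded

kwh = 3.6e6  # joules per kWh
print(f"used going up:        {e_up / kwh:5.1f} kWh")
print(f"recovered going down: {e_down / kwh:5.1f} kWh")
print(f"net gain per trip:    {(e_down - e_up) / kwh:5.1f} kWh")
```

As long as the descending mass times the regeneration efficiency exceeds the ascending mass divided by the drive efficiency, each trip is a net charge.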

The article claims that this is the largest electric vehicle in the world at 110 tons, and although we were not able to find anything larger except the occasional electric train, this is still an impressive feat of engineering that shows that electric vehicles have a lot more utility than novelties or simple passenger vehicles.

Source: Electric Dump Truck Produces More Energy Than It Uses | Hackaday

IBM open sources Adversarial Robustness 360 Toolbox for AI

This is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attacks and defense methods for machine learning models. ART provides an implementation for many state-of-the-art methods for attacking and defending classifiers.

Documentation for ART: https://adversarial-robustness-toolbox.readthedocs.io

https://github.com/IBM/adversarial-robustness-toolbox
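
To give a flavour of the workflow, here is a minimal evasion-attack sketch, assuming a recent ART release (module paths have moved between versions): wrap a trained model, then generate adversarial inputs against it.

```python
# Craft adversarial examples against a scikit-learn model with ART.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model so ART's attacks can query it uniformly.
classifier = SklearnClassifier(model=model)

# Fast Gradient Method: nudge each input by eps along the loss gradient.
attack = FastGradientMethod(estimator=classifier, eps=0.3)
X_adv = attack.generate(x=X)

print(f"accuracy on clean inputs:       {model.score(X, y):.2f}")
print(f"accuracy on adversarial inputs: {model.score(X_adv, y):.2f}")
```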

IBM releases AI Fairness 360 tool open source

The AI Fairness 360 toolkit is an open-source library to help detect and remove bias in machine learning models. The AI Fairness 360 Python package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities. The tutorials and other notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

Being a comprehensive set of capabilities, it may be confusing to figure out which metrics and algorithms are most appropriate for a given use case. To help, we have created some guidance material that can be consulted.

https://github.com/IBM/AIF360
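
A minimal sketch of the metrics workflow, with made-up data and assuming the package’s documented API: wrap a labeled dataset, declare privileged and unprivileged groups, then ask for group-fairness metrics.

```python
# Measure group fairness on a toy dataset with AIF360.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0, 1, 0],  # 1 = privileged group (toy coding)
    "score": [0.9, 0.4, 0.6, 0.2, 0.8, 0.5, 0.7, 0.3],
    "label": [1, 0, 1, 0, 1, 0, 1, 0],  # favorable outcome = 1
})

dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# disparate_impact: favorable-outcome rate, unprivileged / privileged
# (1.0 means parity); statistical parity difference is the raw gap.
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```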

IBM releases AI Explainability tools

The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics.

The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, we have created some guidance material and a chart that can be consulted.

https://github.com/IBM/AIX360

ITER is making a mini sun to power the earth

In southern France, 35 nations are collaborating to build the world’s largest tokamak, a magnetic fusion device that has been designed to prove the feasibility of fusion as a large-scale and carbon-free source of energy based on the same principle that powers our Sun and stars.
The experimental campaign that will be carried out at ITER is crucial to advancing fusion science and preparing the way for the fusion power plants of tomorrow.
ITER will be the first fusion device to produce net energy. ITER will be the first fusion device to maintain fusion for long periods of time. And ITER will be the first fusion device to test the integrated technologies, materials, and physics regimes necessary for the commercial production of fusion-based electricity.
Thousands of engineers and scientists have contributed to the design of ITER since the idea for an international joint experiment in fusion was first launched in 1985. The ITER Members—China, the European Union, India, Japan, Korea, Russia and the United States—are now engaged in a 35-year collaboration to build and operate the ITER experimental device, and together bring fusion to the point where a demonstration fusion reactor can be designed.
[…]
Three conditions must be fulfilled to achieve fusion in a laboratory: very high temperature (on the order of 150,000,000° Celsius); sufficient plasma particle density (to increase the likelihood that collisions do occur); and sufficient confinement time (to hold the plasma, which has a propensity to expand, within a defined volume).
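
Those three conditions are often rolled into a single figure of merit, the fusion “triple product” n·T·τ, which for deuterium-tritium fuel must exceed roughly 3×10²¹ keV·s/m³ for ignition. A quick Python sketch with textbook orders of magnitude (illustrative values, not official ITER parameters):

```python
# Lawson-style triple-product check with illustrative D-T values.
n   = 1.0e20  # plasma density, particles per m^3 (assumed)
T   = 13.0    # temperature in keV (~150,000,000 degrees C)
tau = 3.0     # energy confinement time in seconds (assumed)

IGNITION_THRESHOLD = 3.0e21  # keV*s/m^3, approximate D-T requirement

triple_product = n * T * tau
print(f"n*T*tau = {triple_product:.1e} keV*s/m^3")
print("meets approximate D-T ignition threshold:",
      triple_product >= IGNITION_THRESHOLD)
```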


At extreme temperatures, electrons are separated from nuclei and a gas becomes a plasma—often referred to as the fourth state of matter. Fusion plasmas provide the environment in which light elements can fuse and yield energy.


In a tokamak device, powerful magnetic fields are used to confine and control the plasma.

[…]

The tokamak is an experimental machine designed to harness the energy of fusion. Inside a tokamak, the energy produced through the fusion of atoms is absorbed as heat in the walls of the vessel. Just like a conventional power plant, a fusion power plant will use this heat to produce steam and then electricity by way of turbines and generators.

The heart of a tokamak is its doughnut-shaped vacuum chamber. Inside, under the influence of extreme heat and pressure, gaseous hydrogen fuel becomes a plasma—the very environment in which hydrogen atoms can be brought to fuse and yield energy. (You can read more on this particular state of matter here.) The charged particles of the plasma can be shaped and controlled by the massive magnetic coils placed around the vessel; physicists use this important property to confine the hot plasma away from the vessel walls. The term “tokamak” comes to us from a Russian acronym that stands for “toroidal chamber with magnetic coils.”

First developed by Soviet research in the late 1960s, the tokamak has been adopted around the world as the most promising configuration of magnetic fusion device. ITER will be the world’s largest tokamak—twice the size of the largest machine currently in operation, with ten times the plasma chamber volume.

[…]
Taken together, the ITER Members represent three continents, over 40 languages, half of the world’s population and 85 percent of global gross domestic product. In the offices of the ITER Organization and those of the seven Domestic Agencies, in laboratories and in industry, literally thousands of people are working toward the success of ITER.
[…]
ITER’s First Plasma is scheduled for December 2025.


That will be the first time the machine is powered on, and the first act of ITER’s multi-decade operational program.


On a cleared, 42-hectare site in the south of France, building has been underway since 2010. The ground support structure and the seismic foundations of the ITER Tokamak are in place and work is underway on the Tokamak Complex—a suite of three buildings that will house the fusion experiments. Auxiliary plant buildings such as the ITER cryoplant, the radio frequency heating building, and facilities for cooling water, power conversion, and power supply are taking shape all around the central construction site.

[…]

ITER Timeline


2005: Decision to site the project in France
2006: Signature of the ITER Agreement
2007: Formal creation of the ITER Organization
2007-2009: Land clearing and levelling
2010-2014: Ground support structure and seismic foundations for the Tokamak
2012: Nuclear licensing milestone: ITER becomes a Basic Nuclear Installation under French law
2014-2021: Construction of the Tokamak Building (access for assembly activities in 2019)
2010-2021: Construction of the ITER plant and auxiliary buildings for First Plasma
2008-2021: Manufacturing of principal First Plasma components
2015-2023: Largest components are transported along the ITER Itinerary
2020-2025: Main assembly phase I
2022: Torus completion
2024: Cryostat closure
2024-2025: Integrated commissioning phase (commissioning by system starts several years earlier)
Dec 2025: First Plasma
2026: Begin installation of in-vessel components
2035: Deuterium-Tritium Operation begins

Throughout the ITER construction phase, the Council will closely monitor the performance of the ITER Organization and the Domestic Agencies through a series of high-level project milestones. See the Milestones page for a series of incremental milestones on the way to First Plasma.

Source: What is ITER?

From the FAQ: The EU seems to be paying $17bn (and is responsible for almost half the project costs). There is around $1bn in deactivation and decommissioning costs, making the total around $35bn – as far as they can figure out. That’s a staggering science project!

Lenovo Solution Centre can turn users into Admins – in response, Lenovo backdates LSC’s end of life to before its last release

Not only has a vulnerability been found in Lenovo Solution Centre (LSC), but the laptop maker fiddled with end-of-life dates to make it seem less important – and is now telling the world it EOL’d the vulnerable monitoring software before its final version was released.

The LSC privilege-escalation vuln (CVE-2019-6177) was found by Pen Test Partners (PTP), which said it has existed in the code since it first began shipping in 2011. It was bundled with the vast majority of the Chinese manufacturer’s laptops and other devices, and requires Windows to run. If you removed the app, or blew it away with a Linux install, say, you’re safe right now.

[…]

The solution? Uninstall Lenovo Solution Centre, and if you’re really keen you can install Lenovo Vantage and/or Lenovo Diagnostics to retain the same branded functionality, albeit without the priv-esc part.

All straightforward. However, it went a bit awry when PTP reported the vuln to Lenovo. “We noticed they had changed the end-of-life date to make it look like it went end of life even before the last version was released,” they told us.

Screenshots of the end-of-life dates – initially 30 November 2018, and then suddenly April 2018 after the bug was disclosed – can be seen on the PTP blog. The last official release of the software is dated October 2018, so Lenovo appears to have moved the EOL date back to April of that year for some reason.

Source: Security gone in 600 seconds: Make-me-admin hole found in Lenovo Windows laptop crapware. Delete it now • The Register

Why do tech companies file so many weird patents?

There are lots of reasons to patent something. The most obvious one is that you’ve come up with a brilliant invention, and you want to protect your idea so that nobody can steal it from you. But that’s just the tip of the patent strategy iceberg. It turns out there is a whole host of strategies that lead to “zany” or “weird” patent filings, and understanding them offers a window not just into the labyrinthine world of the U.S. Patent and Trademark Office and its potential failings, but also into how companies think about the future. And while it might be fun to gawk at, say, Motorola patenting a lie-detecting throat tattoo, it’s also important to see past the eye-catching headlines to the bigger issue here: Patents can be weapons and signals. They can spur innovation, as well as crush it.

Let’s start with the anatomy of a patent. Patents have many elements—the abstract, a summary, a background section, illustrations, and a section called “claims.” It’s crucial to know that the thing that matters most in a patent isn’t the abstract, or the title, or the illustrations. It’s the claims, where the patent filer has to list all the new, innovative things her patent does and why she in fact deserves government protection for her idea. The claims matter above everything else.

[…]

For a long time, companies didn’t really worry about the PR that patents might generate. Mostly because nobody was looking. But now, journalists are using patents as a window into a company’s psyche, and not always in a way that makes these companies look good.

So why patent something that could get you raked across the internet coals? In many cases, when a company files for a patent, it has no idea whether it’s actually going to use the invention. Often, patents are filed as early as possible in an idea’s life span. Which means that at the moment of filing, nobody really knows where a field might go or what the market might be for something. So companies will patent as many ideas as they can at the early stages and then pick and choose which ones actually make sense for their business as time goes by.

[…]

In some situations, companies file for patents to blanket the field—like dogs peeing on every bush just in case. Many patents are defensive, a way to keep your competitors from developing something more than a way to make sure you can develop that thing. Will Amazon ever make a delivery blimp? Probably not, but now none of its competitors can. (Amazon seems to be a leader in these patent oddities. Its portfolio also includes a flying warehouse, self-destructing drones, an underwater warehouse, and a drone tunnel.)

[…]

David Stein, a patent attorney, says that he sees this at companies he works with. He tells me that once he was in a meeting with inventors about something they wanted to patent, and he asked one of his standard questions to help him prepare the patent: What products will this invention go into? “And they said, ‘Oh, it won’t.’ ” The team that had invented this thing had been disbanded, and the company had moved to a different solution. But they had gone far enough with the patent application that they might as well keep going, if only to use the patent in the future to keep their competitors from gaining an advantage. (It’s almost impossible to know how many patents wind up being “useful” to a company or turn up in actual products.)

As long as you have a budget for it (and patents aren’t cheap—filing for one can easily cost more than $10,000 all told), there’s an incentive for companies to amass as many as they can. Any reporter can tell you that companies love to boast about the number of patents they have, as if it’s some kind of quantitative measure of brilliance. (This makes about as much sense as boasting about how many lines of code you’ve written—it doesn’t really matter how much you’ve got, it matters if it actually works.) “The number of patents a company is filing has more to do with the patent budget than with the amount they’re actually investing in research,” says Lisa Larrimore Ouellette, a professor at Stanford Law School.

[…]

This patent arm wrestling doesn’t just provide low-hanging fruit to reporters. It also affects business dealings. Let’s say you have two companies that want to make some kind of business deal, Charles Duan, a patent expert at the R Street Institute, says. One of their key negotiation points might be patents. If two giant companies want to cut a deal that involves their patent portfolios, nobody is going to go through and analyze every one of those patents to make sure they’re actually useful or original, Duan says, since analyzing a single patent thoroughly can cost thousands of dollars in legal fees. So instead of actually figuring out who has the more valuable patents, “the [company] with more patents ends up getting more. I’m not sure there’s honestly much more to it.”

Several people I spoke with for this story described patent strategy as “an arms race” in which businesses all want to amass as many patents as they can to protect themselves and bolster their position in these negotiations. “There’s not that many companies that are willing to engage in unilateral disarmament.”

[…]

While disarmament might be unlikely, many companies have chosen not to engage in the patent warfare at all. In fact, companies often don’t patent technologies they’re most interested in. A patent necessarily lays out how your product works, information that not all companies want to divulge. “We have essentially no patents in SpaceX,” Elon Musk told Chris Anderson at Wired. “Our primary long-term competition is in China. If we published patents, it would be farcical, because the Chinese would just use them as a recipe book.”

[…]

In most cases, once the inventors and engineers hand over their ideas and answer some questions, it’s the lawyer’s job to build those things out into an actual patent. And here is where a lot of the weirdness actually enters the picture, because the lawyer essentially has to get creative. “You dress up science fiction with words like ‘means for processing’ or ‘data storage device,’ ” says Mullin.

Even the actual language of the patents themselves can be misleading. It turns out you actually can write fan fiction about your own invention in a patent. Patent applications can include what are called “prophetic examples,” which are descriptions of how the patent might work and how you might test it. Those prophetic examples can be as specific as you want, despite being completely fictional. Patents can legally describe a “46-year-old woman” who never existed and say that her “blood pressure is reduced within three hours” when that never actually happened. The only rule about prophetic examples is that they cannot be written in the past tense. Which means that when you’re reading a patent, the examples written in the present tense could be real or completely made up. There’s no way to know.

If this sounds confusing, it is, and not just to journalists trying to wade through these documents. Ouellette, who published a paper in Science about this problem recently, admitted that even she wouldn’t necessarily be able to tell whether experiments described in a patent had actually been conducted.
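The one hard rule is at least mechanically checkable. Here is a toy Python sketch (entirely my own illustration, nothing from the paper or the article) that applies it to the blood-pressure example above: a past-tense auxiliary means the example must describe real work, while present tense leaves you guessing. A regex is obviously not a real tense tagger; it only demonstrates the asymmetry.

```python
# Toy illustration of the prophetic-example tense rule: past tense must be
# real; present (or future) tense could be real or entirely fictional.
# Matching only past-tense auxiliaries is a crude assumption, not real NLP.
import re

PAST_AUXILIARIES = re.compile(r"\b(was|were|had|did)\b", re.IGNORECASE)

def classify(sentence: str) -> str:
    if PAST_AUXILIARIES.search(sentence):
        return "past tense -> must describe real work"
    return "present/future tense -> could be real or prophetic"

for s in [
    "The patient's blood pressure was reduced within three hours.",
    "The patient's blood pressure is reduced within three hours.",
]:
    print(classify(s), "|", s)
```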

Some people might argue that these kinds of speculative patents are harmless fun, the result of a Kafkaesque kaleidoscope of capitalism, competition, and bureaucracy. But it’s worth thinking about how they can be misused, says Mullin. Companies that are issued vague patents can go after smaller entities and try to extract money from them. “It’s like beating your competitor over the head with a piece of science fiction you wrote,” he says.

Plus, everyday people can be misled about just how much to trust a company based on its patents. One study found that out of 100 patents cited in scientific articles or books that used only prophetic examples (in other words, had no actual data or evidence in them), 99 were inaccurately described as having been based on real data.

[…]

Stein says that recently he’s had companies bail on patents because they might be perceived as creepy. In fact, in one case, Stein says the company even refiled a patent to avoid a PR headache. As distrust of technology corporations mounts, the way we read patents has changed. “Everybody involved in the patent process is a technologist. … We don’t tend to step back and think, this could be perceived as something else by people who don’t trust us.” But people are increasingly unwilling to give massive tech companies the benefit of the doubt. This is why Google’s patent for a “Gaze tracking system” got pushback—do you really want Google to know exactly what you look at and for how long?

[…]

There is still real value in reading the patents that companies apply for—not because doing so will necessarily tell you what they’re actually going to make, but because they tell you what problems the company is trying to solve. “They’re indicative of what’s on the engineer’s mind,” says Duan. “They’re not going to make the cage, but it does tell you that they’re worried about worker safety.” Spotify probably won’t make its automatic parking finder, so you don’t have to pause your music in a parking garage while you hunt for a spot. But it does want to figure out how to reduce interruptions in your music consumption. So go forth and read patents. Just remember that they’re often equal parts real invention and sci-fi.

Source: Why do tech companies file so many weird patents?

That science fiction concepts can be patented is news to me. So you can whack companies around with patents you thought of but never implemented. Sounds like a really good idea. Not.