Five Minute Blockchain – No. 55
16.05.2023
Estimated reading time: 7 min 55 seconds
QUOTE OF THE WEEK:
TRUST
Transparency and safety: European Parliament preparing a set of rules for AI and surveillance technology
In the past, rules and regulations for new technologies lagged behind, sometimes by years. With Artificial Intelligence (AI), things seem to be moving faster. Last week two committees of the European Parliament adopted a draft of far-ranging rules for the use of AI and surveillance technology.
Once approved, these could become the “world’s first rules on Artificial Intelligence”. The rules include the right to complain about AI systems and to receive explanations of decisions made by high-risk systems. Another element is the proposed ban on “biometric surveillance, emotion recognition, predictive policing AI systems.”
In a press release, the European Parliament said the goal is to ensure a human-centric and ethical development of AI in Europe.
“MEPs aim to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly.”
“AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).”
Below is a list of technologies and practices which would be banned under the future EU rules:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location or past criminal behaviour);
- Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
European Parliament Press Release
Google deploys passkeys
Google is extending the roll-out of passkeys to all Google accounts. This is part of a broader move away from weak, easily guessed passwords like “12345”, with similar pushes towards passwordless security expected from Apple, Microsoft and others.
From a Google Press Release:
“Passkeys are a new way to sign in to apps and websites. They’re both easier to use and more secure than passwords, so users no longer need to rely on the names of pets, birthdays or the infamous “password123.” Instead, passkeys let users sign in to apps and sites the same way they unlock their devices: with a fingerprint, a face scan or a screen lock PIN. And, unlike passwords, passkeys are resistant to online attacks like phishing, making them more secure than things like SMS one-time codes.”
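Under the hood, a passkey is a public-key credential built on the WebAuthn/FIDO2 standards: the device keeps a private key and the website only stores the matching public key, so there is no shared secret to phish or leak. The sketch below is a minimal Python illustration of that challenge–response idea using the cryptography package; it is a simplification, not the actual WebAuthn protocol, and the variable names are our own.

```python
# Simplified illustration of the public-key challenge-response behind passkeys.
# Not the real WebAuthn protocol: attestation, origin binding and authenticator
# data are left out. Requires the "cryptography" package.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# 1. Registration: the device generates a key pair; only the PUBLIC key is
#    sent to the website, so there is no password that could be stolen.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# 2. Sign-in: the server sends a fresh random challenge ...
challenge = os.urandom(32)

# ... the device signs it after a local unlock (fingerprint, face scan or PIN) ...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# 3. ... and the server checks the signature against the stored public key.
try:
    server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Sign-in accepted: response matches the registered passkey.")
except InvalidSignature:
    print("Sign-in rejected.")
```

Because the response is a signature over a one-time challenge, a phishing site that captures it cannot reuse it elsewhere – which is why the press release describes passkeys as resistant to phishing.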
Conference: Future of AI
On 29 June 2023, the Horizon Europe research projects AI4media, AI4Trust, TITAN and vera.ai – in cooperation with the European Commission – will host a one-day event on various facets of Artificial Intelligence and the disinformation landscape. Full title: Meet the Future of AI: Countering Sophisticated & Advanced Disinformation.
CONTENT
Media company Vice files for bankruptcy
The media company was once valued at $5.7bn but had recently seen declining revenue from digital advertising. The company’s websites will keep operating until a buyer is found.
“Investments from media titans like Disney and shrewd financial investors like TPG, which spent hundreds of millions of dollars, will be rendered worthless by the bankruptcy, cementing Vice’s status among the most notable bad bets in the media industry.”
Employees at Microsoft like the new training videos as much as a Netflix series
Training videos are usually dull – you have to watch them, but few people enjoy it. A team at Microsoft has found a better way: training videos so interesting and well made that people actually want to watch them.
The series is called “Trust Code” and is now in its 7th season. The main character is played by an aspiring actor named Devin Badoo – a star, at least among many of the 220,000 Microsoft workers:
“For employees at most companies, sitting through training videos every year is about as welcome as a toothache. “Trust Code,” with its recurring characters and end-of-season cliffhangers, is redefining the genre. Since launching in 2017, it has inspired watch parties, viral memes and T-shirts with Mr. Badoo’s image.”
AI Update
Last week Google held its big annual developer conference, Google I/O, and announced several significant AI updates for its search platform.
Here are three links for a quick update:
Nearly half of YouTube views in the US are on TVs
YouTube is growing. A new generation of content creators delivers a constant flow of engaging video content for almost any interest and niche. The audience has noticed, and YouTube is on its way to becoming an even bigger competitor to traditional TV.
“Internal data indicate that close to 45% of overall YouTube viewing in the U.S. today is happening on TV screens”.
BLOCKCHAIN
Blockchain Large Language Models
How to use a large language model as an intrusion-detection tool for blockchain transactions:
“This paper presents a dynamic, real-time approach to detecting anomalous blockchain transactions. The proposed tool, BlockGPT, generates tracing representations of blockchain activity and trains from scratch a large language model to act as a real-time Intrusion Detection System.”
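The idea is to treat a transaction’s execution trace as a sequence of tokens, train a language model on traces of normal activity, and flag transactions the model finds highly unlikely. The sketch below only illustrates that scoring logic: BlockGPT trains a transformer from scratch on real trace representations, whereas here a tiny bigram model and made-up traces stand in, and all names (train_bigram, score_trace, the threshold) are our own.

```python
# Illustrative sketch of likelihood-based anomaly scoring, NOT the BlockGPT
# implementation: the paper trains a transformer from scratch on real
# blockchain traces; a tiny bigram model and toy traces stand in here.
import math
from collections import Counter, defaultdict

def train_bigram(traces):
    """Count token bigrams over traces of 'normal' transactions."""
    counts = defaultdict(Counter)
    for trace in traces:
        tokens = ["<s>"] + trace
        for prev, cur in zip(tokens, tokens[1:]):
            counts[prev][cur] += 1
    return counts

def score_trace(counts, trace, vocab_size):
    """Average negative log-likelihood of a trace; higher = more surprising."""
    tokens = ["<s>"] + trace
    nll = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        total = sum(counts[prev].values())
        # Add-one smoothing so unseen transitions still get a small probability.
        prob = (counts[prev][cur] + 1) / (total + vocab_size)
        nll -= math.log(prob)
    return nll / max(len(trace), 1)

# Toy traces: each transaction is a sequence of call/event tokens.
normal_traces = [
    ["CALL:swap", "TRANSFER:tokenA", "TRANSFER:tokenB", "EMIT:Swap"],
    ["CALL:deposit", "TRANSFER:tokenA", "EMIT:Deposit"],
] * 50
suspicious_trace = ["CALL:flashLoan", "CALL:swap", "CALL:swap", "SELFDESTRUCT"]

model = train_bigram(normal_traces)
vocab = {t for trace in normal_traces for t in trace} | set(suspicious_trace) | {"<s>"}
threshold = 3.0  # in practice calibrated on held-out normal traces
score = score_trace(model, suspicious_trace, len(vocab))
print(f"anomaly score = {score:.2f}",
      "-> flag for review" if score > threshold else "-> looks normal")
```

The paper’s contribution is doing this at scale and in real time with a purpose-built model; the bigram stand-in only shows why a low-likelihood trace is a useful anomaly signal.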
The Crypto Trash Moat
A big question: To what extent are crypto platforms used for crime?
There can only be estimates: “Any conversation about crypto and crime needs to disclose that, according to the folks with the data, less than 1% of total crypto transactions can be tied to illicit use – at least that’s what Chainalysis reports.”
Much of what goes on in crypto and elsewhere involves “confidence games”: criminals lure people into paying for something (often presented as an investment) and then trick them out of their money.
In 2022 the US Department of Justice named cybercrime prosecutor Eun Young Choi as the first director of the National Cryptocurrency Enforcement Team (NCET).
Instead of major headline-drawing scandals like FTX and 3AC, Choi’s department seems primarily focused on relatively smaller issues like social media scammers, darknet misuse and online fraudsters – an activity that’s rarely discussed openly but which exists as a sort of background hum for anyone spending time on Crypto Twitter and Discord. (Paul Dylan-Ennis, a frequent contributor to CoinDesk, calls this crypto’s “trash moat”.)
The amounts of money the NCET and other US agencies are recovering are substantial:
While scams like these often damage only a single victim at a time, the sums are still large. The NCET, together with other agencies, seized upwards of $112 million from busting six such US-based scams. The Federal Bureau of Investigation (FBI) estimates $3.31 billion was stolen from people in 2022 through investment fraud, with crypto-related scams accounting for the bulk (~$2.57 billion) of that figure. Worse than the money lost, the proliferation of confidence games – which require bad actors to cultivate long-term relationships and build trust with their marks – has tainted crypto’s reputation.
Thank you for reading. If you have questions or suggestions, please get in touch with us via info@trublo.eu.