  • ✇Graham Cluley
  • Smashing Security podcast #466: Meta sees everything, Copy Fail, and a deepfake gets hired Graham Cluley

Smashing Security podcast #466: Meta sees everything, Copy Fail, and a deepfake gets hired

May 6, 2026, 20:30
Meta's smart glasses promise privacy "designed for you" - but everything they record was being beamed off to workers in Nairobi to label by hand. When those workers blew the whistle, Meta sacked all 1,108 of them. Meanwhile, the IT press is in a frenzy over a new Linux bug called "Copy Fail" - complete with logo, dedicated website, and a marketing-friendly name. But is it really the disaster everyone's making it out to be? And in our featured interview, Jake Moore of ESET explains how he tricked a company into offering his deepfake clone a job - after a perfectly normal-looking video interview. All this and more in episode 466 of the "Smashing Security" podcast with cybersecurity expert and keynote speaker Graham Cluley, joined this week by special guest Paul Ducklin.
  • ✇Open Source Intelligence Brasil
  • South Korea and the AI Basic Law osintbrasil.blogspot.com

South Korea and the AI Basic Law



Global Context

While the European Union and the United States are still debating regulatory frameworks, South Korea has become a global pioneer by passing legislation specifically targeting Artificial Intelligence. The decision reflects the urgency of addressing AI's social and economic impacts, especially in a country that is highly digitalized and culturally influential.

Structure of the Law

The AI Basic Law organizes systems into risk categories:

  • High risk: critical applications (such as healthcare, public safety, and finance) subject to strict rules and intense oversight.

  • Low risk: systems aimed at entertainment or everyday use, with greater regulatory freedom.

In addition, the law requires that all AI-generated content carry mandatory watermarks, ensuring traceability and transparency.

Main Objectives

  • Protection against digital crime: combating pornographic deepfakes, which have become a serious threat to citizens' privacy.

  • Defense of the cultural industry: safeguarding the image of K-Pop artists, who are frequent targets of digital manipulation.

  • Balance between innovation and safety: creating a regulatory environment that does not stifle companies such as Samsung, LG, and Naver, but imposes clear limits on abuse.

Impact and Significance

South Korea positions itself as a global benchmark in technology governance, surpassing even the European Union in practical implementation. The country shows that it is possible to coexist with the machine through clear rules, proportionate penalties, and incentives for safe development.

This legislation could serve as an international model, inspiring other countries to adopt similar measures to protect citizens and industries without slowing technological progress.

  • ✇Graham Cluley
  • The AI Fix #81: ChatGPT is the last AI you’ll understand, and your teacher is a deepfake Graham Cluley

The AI Fix #81: ChatGPT is the last AI you’ll understand, and your teacher is a deepfake

December 16, 2025, 12:30
In episode 81 of The AI Fix, Graham discovers that deepfakes are already marking your kids' homework, while Mark glimpses the future when he discovers AI agents that can communicate by reading each other's minds. Also in this episode, a Chinese robot called Miro U proves six arms are better than two; Mark discovers a well known prompting technique doesn't work unless you want to make your AI dumber; Network Rail delays 32 trains because of an AI photo of a wonky bridge; and our hosts ponder the explosion of progress on the ARC-AGI-2 reasoning benchmark. All this and much more is discussed in the latest edition of "The AI Fix" podcast by Graham Cluley and Mark Stockley.
  • ✇Security Intelligence
  • Are successful deepfake scams more common than we realize? Jennifer Gregory

Are successful deepfake scams more common than we realize?

January 24, 2025, 14:00

Many times a day worldwide, a boss asks a team member to perform a task during a video call. But is the person assigning tasks actually who they say they are? Or is it a deepfake? Instead of blindly following orders, employees must now ask themselves whether they are becoming victims of fraud.

Earlier this year, a finance worker found themselves in a video meeting with someone who looked and sounded just like their CFO. After the meeting was over, they dutifully followed their boss's instructions to send HK$200 million, roughly US$25 million.

But it wasn’t actually their boss — just an AI video representation called a deepfake. Later that day, the employee realized their terrible mistake after checking with the corporate offices of their multinational firm. They had been a victim of a deepfake scheme that defrauded the organization out of $25 million.

Businesses are often deepfake targets

The term deepfake refers to AI-created content — video, image, audio or text — that contains false or altered information, such as Taylor Swift promoting cookware and the infamous fake Tom Cruise. Even the recent hurricanes hitting the U.S. led to multiple deepfake images, including fake flooded Disney World photos and heartbreaking AI-generated pictures of people with their pets in floodwaters.

While deepfakes, also referred to as synthetic media, targeted at individuals typically serve to manipulate people, cyber criminals targeting businesses are looking for monetary gain. According to the CISA Contextualizing Deepfake Threats to Organizations information sheet, threats targeting businesses tend to fall into one of three categories: executive impersonation for brand manipulation, impersonation for financial gain or impersonation to gain access.

But the recent incident in Hong Kong wasn’t just one employee making a mistake. Deepfake schemes are becoming increasingly common for businesses. A recent Medus survey found that the majority (53%) of finance professionals have been targeted by attempted deepfake schemes. Even more concerning is the fact that more than 43% admitted to ultimately falling victim to the attack.


Are deepfake attacks underreported?

The key word in the Medus research is "admitted," and it raises a big question: do people fail to report being a victim of a deepfake attack because they are embarrassed? The answer is probably. After the fact, the fake seems obvious to other people, and it's tough to admit that you fell for an AI-generated image. But underreporting only adds to the shame and makes it easier for cyber criminals to get away with it.

Most people assume that they could spot a deepfake. But that’s not the case. The Center for Humans and Machines and CREED found a wide gap between people’s confidence in identifying a deepfake and their actual performance. Because many people overestimate their ability to identify a deepfake, it adds to the shame when someone falls victim, which likely leads to underreporting.

Why people fall for deepfake schemes

The employee who was tricked by the deepfake of the CFO to the tune of $25 million later admitted that when they first received the email supposedly from the CFO, the mention of a secret transaction made them wonder whether it was a phishing email. But once they joined the video call, they recognized other members of their department and decided it was authentic. However, the employee later learned that the video images of those department members were also deepfakes.

Many people who are victims overlook their concerns, questions and doubts. But what makes people, even those educated on deepfakes, push their concerns to the side and choose to believe an image is real? That’s the $1 million — or $25 million — question that we need to answer to prevent costly and damaging deepfake schemes in the future.

Research published in Sage Journals asked who was more likely to fall for deepfakes and found no clear pattern around age or gender, though older individuals may be more vulnerable to these schemes and have a harder time detecting them. Additionally, the researchers found that while awareness is a good starting point, it appears to have limited effectiveness in preventing people from falling for deepfakes.

However, computational neuroscientist Tijl Grootswagers of Western Sydney University likely hit the nail on the head as to the challenge of spotting a deepfake: it’s a brand new skill for each of us. We’ve learned to be skeptical of news stories and bias, but questioning the authenticity of an image we can see goes against our thought processes. Grootswagers told Science Magazine “In our lives, we never have to think about who is a real or a fake person. It’s not a task we’ve been trained on.”

Interestingly, Grootswagers discovered that our brains are better at detection without our conscious intervention. When people looked at a deepfake, the image produced a different electrical signal in the brain's visual cortex than a legitimate image or video did. When asked why, he wasn't quite sure: maybe the signal never reaches our consciousness due to interference from other brain regions, or maybe humans don't recognize the signals that an image is fake because it's a new task.

This means that each of us must begin to train our brains to consider that any image or video we view could be a deepfake. By asking this question every time we begin to act on content, we may learn to notice the fakes that our brains are already flagging. And most importantly, if we do fall victim to a deepfake, especially at work, it's key that we report every instance. Only then can experts and authorities begin to curb the creation and proliferation of deepfakes.

The post Are successful deepfake scams more common than we realize? appeared first on Security Intelligence.

  • ✇Security Intelligence
  • 2024 trends: Were they accurate? Jennifer Gregory

2024 trends: Were they accurate?

December 23, 2024, 14:00

The new year always kicks off with a flood of prediction articles; then, 12 months later, our newsfeed is filled with wrap-up articles. But we are often left to wonder if experts got it right in January about how the year would unfold. As we close out 2024, let’s take a moment to go back and see if the crystal balls were working about how the year would play out in cybersecurity.

Here are five trends that were often predicted for 2024.

1. The use of artificial intelligence in cybersecurity will increase

As the year began, there was no doubt that artificial intelligence (AI) would be a main character in the year’s events — and that was right on the money. Many organizations began to use or continue using AI in their cybersecurity operations in a wide range of ways. For example, Microsoft’s internal response teams use a large language model to manage requests and tickets based on how they were handled previously, saving 20 hours per person each week.

As the world turned its attention over the summer to the Paris Olympics, the team responsible for keeping the Paris Olympics data, apps, systems and even physical buildings protected turned to AI. While 140 cyberattacks were linked to the Olympics, the teams’ efforts resulted in no disruption of the competitions.

Throughout the entire life cycle of the games, from before the opening ceremony to after the torch left Paris, cybersecurity teams used AI to secure critical information systems, protect sensitive data and raise awareness within the games' ecosystem. Additionally, AI-based algorithmic video surveillance scanned footage to detect abandoned bags, the presence of weapons, unusual crowd movements and fires.

2. Organizations will see more AI-based threats and attacks

Unfortunately, experts were right about cyber criminals also turning to AI technology to conduct attacks more effectively. Threat actors are using AI in a wide range of ways for data breaches and cyberattacks, including improved reconnaissance, better target profiling and lowering the expertise required to conduct an attack. Because AI can automate many processes required for an attack, such as vulnerability scanning, exploitation and data exfiltration, more cyber criminals now have the skills for even more damaging attacks.

“Since the release of gen AI, attackers are increasingly employing tools along with large language models to carry out large-scale social engineering attacks, and Gartner predicts that by 2027, 17% of total cyberattacks/data leaks will involve generative AI,” wrote Gartner in an August 2024 press release.

IBM distinguished engineer Jeff Crume has no doubt that the trend of cyber criminals using AI for attacks will continue in 2025. He says that cyber professionals must do a better job of authentication because attackers are finding it easier to log in than to hack in. While looking for bad grammar and spelling errors still works to spot phishing attacks, he expects this will no longer work as AI-based phishing attacks reach mass distribution.


3. An increase in deepfakes and deceptions

While experts correctly predicted that deepfakes would become more of a threat in 2024, it’s likely no one expected the scale of arguably the most shocking deepfake story of the year. At the beginning of 2024, attackers created a deepfake video call that led to an employee giving the cyber criminals $25 million, which showed the power and damage that deepfakes can cause. But the World Economic Forum expects that the trend will only increase, even declaring that over the next two years, AI-fueled disinformation will be the number one threat in the world.

Throughout the year, other deepfake incidents made headlines. Quantum AI, an AI company, was suspected by the Securities and Exchange Commission of using AI to generate deepfakes on social media to deceive the public into believing that Elon Musk developed the company's technology. Even the well-received Paris Olympics were not immune to deepfakes, with the Russian group Storm-1679 suspected of creating AI content to discredit the International Olympic Committee. As the year closed out, German citizens saw an increase in AI-based propaganda regarding the upcoming German elections in 2025, including text, images and video.

4. A growing impact of quantum computing on cybersecurity

Ray Harishankar, IBM Fellow, IBM Quantum Safe, predicted that in 2024, “harvest now, decrypt later” attacks would become more common. As the year moved forward, quantum computing became an increasingly top concern, especially the harvest-now attacks. In July, the Office of Management and Budget released the Report on Post-Quantum Cryptography, which urged organizations to prepare their systems and processes for advancements in quantum computing.

During the fall of 2024, predictions of quantum computing's impact became even more urgent, with forecasts that symmetric cryptography would be unsafe by 2029 and asymmetric cryptography fully breakable by quantum technology by 2034.

“That does not mean, however, that the risks are five years away. The prospect of harvest-now, decrypt-later attacks is already a concern, making the post-quantum cryptography transition an urgent priority,” wrote Gartner.

5. Recession of ransomware attacks

John Dwyer, former Head of Research at IBM X-Force, predicted we might face a ransomware recession as more companies pledged not to pay the ransom. While we wish we could declare this came true, the jury is still out; we likely won't know for sure until all the data from 2024 is collected.

However, Wired declared in the summer of 2024 that "ransomware showed no signs of slowing down in 2024 — despite increasing police crackdowns." In December, Heather Wishart-Smith's Forbes article The Persistent Ransomware Threat: 2024 Trends and High-Profile Attacks described cyber criminals' growing use of dual extortion as a trend in 2024.

All in all, the experts were largely on target with their 2024 predictions. And in the next few weeks, we will start the prediction game all over again as we wonder what’s in the cards for cybersecurity in 2025.

