Cybercrime is big business in Asia, and AI could be about to make things worse

Southeast Asia has become a global hub for cyber fraud, where high-tech scams meet human trafficking. In countries such as Cambodia and Myanmar, criminal syndicates run "pig butchering" operations out of industrial-scale scam centers, staffed by trafficked workers forced to deceive victims in wealthier markets such as Singapore and Hong Kong.

The scale is staggering: one United Nations estimate puts global losses from these schemes at $37 billion. And it could soon get worse.

The region's boom in cybercrime is having political and policy consequences. Thailand reported a drop in Chinese visitors this year after a Chinese actor was kidnapped and forced to work in a scam compound in Myanmar; Bangkok is now struggling to convince tourists that it is safe to visit. Singapore just passed an anti-scam law that allows law enforcement to freeze the bank accounts of scam victims.

But why has Asia become notorious for cybercrime? Ben Goodman, Okta's general manager for Asia-Pacific, notes that the region offers some unique dynamics that make online fraud easier to pull off. For example, the region is a "mobile-first market": popular mobile messaging platforms such as WhatsApp, Line, and WeChat facilitate direct contact between scammer and victim.

AI is also helping scammers overcome Asia's linguistic diversity. Goodman notes that machine translation, while a "phenomenal use case for generative AI," also makes it easier for people to be baited into clicking the wrong links or approving something.

Nation-states are getting involved as well. Goodman also points to allegations that North Korea is placing fake employees inside major tech companies to gather intelligence and funnel much-needed money back to the isolated country.

A new risk: "shadow" AI

Goodman is concerned about a newer risk from AI in the workplace: "shadow" AI, or employees using personal accounts to access AI models without company oversight. "That could be someone preparing a presentation for a business review, going to ChatGPT on their personal account, and generating an image," he explains.

This can lead to employees unknowingly sharing confidential information on a public AI platform, creating "a lot of risk in terms of information leakage."

Ben Goodman, SVP and GM of APAC at Okta.

Courtesy of Okta

AI agents can also blur the line between personal and professional identities: for example, something tied to your personal email instead of your corporate account. "As a corporate user, my company gives me an application to use, and they want to govern how I use it," he explains.

But "I never use my personal profile for a corporate service, and I never use my corporate profile for a personal service," he adds. "The ability to delineate who you are, whether you're at work using work services or in your personal life using your personal services, is how we think about customer identity versus corporate identity."

For Goodman, this is where things get complicated. AI agents are empowered to make decisions on a user's behalf, which makes it important to establish whether that user is acting in a personal or a corporate capacity.

"If your human identity is ever stolen, the blast radius in terms of what can be done quickly to steal money from you or damage your reputation is much larger," Goodman warns.
