eCommerce

Succeeding in the Christmas business with bot management

They used to be called parsers, crawlers or robots. Today, the small applications that automatically carry out web requests or fill in forms are simply called bots. In 2020 they already generated over 40 percent of internet traffic. Good bots browse the web on behalf of search engines: they feed Google, Bing and the like, and they help customers find your shop and website.

But malicious bots want to spy, steal, cheat and deceive. Cybercriminals use automated versions of these little helper programs to harvest data from websites and from the interfaces (APIs) between web applications. Or they abuse application logic such as registration, authentication or payment processes in order to hijack identities or manipulate the flow of goods or payments. According to analyses by various experts, 40 percent of all login attempts now come from malicious bots. Cybersecurity Ventures, a leading researcher on the global cyber economy, calculates that the worldwide cost of cybercrime will grow by 15 percent per year over the next five years. By 2025, cybercriminals are expected to cause losses of US $10.5 trillion per year. Compared to 2015, when it was 3 trillion, that is an increase of 250 percent. These figures do not yet include the reputational damage that can quickly cost a successful web shop its entire existence.

The security teams of web shops could, of course, block all bot traffic with their tools. But that would mean shooting themselves in the foot: the good helpers, such as those from search engines, performance measurement services and price comparison portals, direct customers to their shops.

The only alternative is therefore to monitor bot traffic continuously and in real time and to recognize the intention of each helper program with advanced tools, because malicious bots differ from their good cousins in their behavior. The bad helpers fire automated requests at short intervals with varying content and produce anomalies. They flood web shops, their APIs and, above all, login forms with trial-and-error requests, trying different login data until one attempt succeeds.
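This trial-and-error pattern can be caught with simple rate accounting. A minimal sketch, assuming an in-memory per-client history and illustrative window and threshold values (a real deployment would tune these and use shared storage such as a cache cluster):

```python
import time
from collections import defaultdict, deque

# Illustrative assumptions: window size and request budget are examples,
# not tuned production values.
WINDOW_SECONDS = 10
MAX_REQUESTS = 20  # human visitors rarely exceed this within the window

_history = defaultdict(deque)  # client IP -> timestamps of recent requests

def is_suspicious(client_ip, now=None):
    """Return True once a client exceeds the request budget in the window."""
    now = time.time() if now is None else now
    q = _history[client_ip]
    q.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS
```

A client that trips this check can then be challenged (e.g. with a CAPTCHA) rather than blocked outright, so that false positives stay recoverable.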


Four malicious bot intentions hide behind the terms content scraping, account takeover, SQL injection and API abuse. With content scraping, bots steal content such as product descriptions and images in order to reuse it on fake pages. Unlike search engine bots, which identify themselves with user-agent strings such as Googlebot and respect robots.txt, scrapers disguise their identity. Stolen content can get your own web shop downgraded in the Google ranking and thus ruin the Christmas business. Even more dangerous, however, are the three other bot methods, which cause immediate and lasting damage to your web shop or your customers.
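Because scrapers can simply copy a crawler's user-agent string, the claimed identity should be cross-checked. A hedged sketch of that idea: major search engines resolve, via reverse DNS, to hostnames in their own domains, so a claimed crawler whose address resolves elsewhere is likely a scraper. The trusted-domain list below is an assumption for illustration, and the reverse-DNS name is passed in as a parameter to keep the sketch pure (in practice it would come from `socket.gethostbyaddr`):

```python
# Assumed trusted crawler domains, for illustration only.
TRUSTED_SUFFIXES = (".googlebot.com", ".google.com", ".search.msn.com")

def claims_to_be_search_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return "googlebot" in ua or "bingbot" in ua

def is_verified_search_bot(user_agent: str, reverse_dns_name: str) -> bool:
    """A claimed crawler identity must be backed by its reverse-DNS name."""
    if not claims_to_be_search_bot(user_agent):
        return False
    return reverse_dns_name.endswith(TRUSTED_SUFFIXES)
```

Requests that claim a crawler identity but fail the DNS check can be treated as scraping attempts.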



Account takeover and identity theft through credential stuffing

Cybercriminals can now buy hundreds of millions of current account records, complete with email addresses, usernames and passwords, on the darknet. With these credentials, bots are unleashed on the login and authentication forms of thousands of websites. This process is called credential stuffing, and it often succeeds.

That is because many internet users reuse the same access data across different web applications. Once inside, the bots lock out the legitimate owners by simply changing the passwords. The attackers intercept the new password and can then hijack the account. With the account information they find, such as payment methods and personal details, they can take over the customer's identity and cause great damage.

The security teams of a web shop should therefore permanently monitor authentication events such as logins, account settings and password resets. If accounts and identities are hijacked, the attackers give themselves away through unusual or frequent changes to the settings. These include changes of address, made in order to redirect ordered goods, as well as changes to email addresses. As the frequency of such actions increases, security teams should freeze the affected accounts until the user's identity can be reconfirmed.
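The freeze-on-frequent-changes rule described above can be sketched as a small event monitor. The event names and the threshold are illustrative assumptions; a production system would also weight events by risk and decay counts over time:

```python
from collections import defaultdict

# Illustrative assumptions: which events count as sensitive, and how many
# of them trigger a freeze.
SENSITIVE_EVENTS = {"address_change", "email_change", "password_reset"}
FREEZE_THRESHOLD = 3

class AccountMonitor:
    def __init__(self):
        self.changes = defaultdict(int)  # account -> sensitive-change count
        self.frozen = set()

    def record(self, account_id: str, event: str) -> bool:
        """Record an authentication event; return True if the account is frozen."""
        if event in SENSITIVE_EVENTS:
            self.changes[account_id] += 1
            if self.changes[account_id] >= FREEZE_THRESHOLD:
                # Freeze until the user's identity is reconfirmed.
                self.frozen.add(account_id)
        return account_id in self.frozen
```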



SQL injection (SQLi) and API abuse exploit gaps in the programming

With SQLi, bots usually perform automated scans of forms and APIs over a longer period of time in order to detect security gaps in the SQL handling. Once a bot has found a loophole, it injects database commands into the application in order to read the database, record the traffic, alter data or gain control over the database.
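The standard defense against this class of attack is to pass user input as bound parameters instead of concatenating it into the query string. A minimal sketch using Python's built-in sqlite3 module (table and data are illustrative):

```python
import sqlite3

# Illustrative in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user(conn, name: str):
    # The `?` placeholder makes the driver treat `name` strictly as data,
    # so an injected payload such as "' OR '1'='1" matches nothing instead
    # of rewriting the query.
    cur = conn.execute("SELECT name, email FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

Had the query been built with string concatenation, the same payload would have returned every row in the table.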

API abuse works in a similar way. Here the bots first probe the traffic of a publicly accessible API in order to intercept personal data or credit card information. To do this, they often forge headers, for example those intended to convey the original IP address of a user. The market researchers at Gartner estimate that API abuse will be the most common type of attack on web applications by 2022, because APIs have become indispensable for modern web and cloud applications; without them a web shop can no longer operate. In the normal workflow they transfer data for financial transactions, inventory and price information between a large number of systems. Security teams should therefore monitor their integrity permanently and in real time.
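One concrete consequence of forged headers: a field like X-Forwarded-For, which is meant to convey the original client address, is client-supplied and trivially spoofed, so it should only be believed when the request demonstrably arrived through one's own proxy. A hedged sketch, with an assumed trusted-proxy list for illustration:

```python
from typing import Optional

# Illustrative assumption: the addresses of our own reverse proxies.
TRUSTED_PROXIES = {"10.0.0.1"}

def real_client_ip(peer_ip: str, forwarded_for: Optional[str]) -> str:
    """Address to use for rate limiting and geo checks.

    Only when the TCP peer is one of our own proxies do we trust the
    forwarded header; the last entry is the one our proxy appended.
    Otherwise a bot could spoof the header to evade IP-based controls,
    so we fall back to the actual socket address.
    """
    if peer_ip in TRUSTED_PROXIES and forwarded_for:
        return forwarded_for.split(",")[-1].strip()
    return peer_ip
```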



Transparency across all web traffic

Detecting and blocking cybercriminal activity immediately is a key requirement for security teams and the tools they use if they are to stop hackers effectively. Especially in the Christmas business, web shops cannot afford service interruptions, data leaks and account lockouts.

Preventing identity theft and API abuse requires full transparency about where and how cybercriminals manipulate the applications. To gain such insights in real time, many security teams use tools that let them monitor the current activity in their applications and across all users. This enables them to analyze the context of web requests: they check certain attributes in HTTP request headers and responses, or detect an unusual accumulation of IP addresses from abroad. Some security teams use self-learning AI algorithms that can distinguish malicious bots from authentic users and filter them out. Based on analyses of legitimate traffic, security teams can define parameters for how bot-generated requests should be identified and when the alarm should be raised. This enables them to recognize and block automated login attempts.
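Such parameter-based identification can be sketched as a simple scoring rule over request attributes. The chosen signals, weights and alarm threshold here are illustrative assumptions; real tools derive them from baselines of legitimate traffic:

```python
# Illustrative assumption: two or more suspicious signals raise the alarm.
ALARM_THRESHOLD = 2

def bot_score(headers: dict, requests_last_minute: int) -> int:
    """Count suspicious attributes of a request."""
    score = 0
    ua = headers.get("User-Agent", "")
    if not ua or "python-requests" in ua or "curl" in ua:
        score += 1  # missing or obviously scripted user agent
    if "Accept-Language" not in headers:
        score += 1  # real browsers nearly always send this header
    if requests_last_minute > 60:
        score += 1  # faster than a human clicks
    return score

def should_alarm(headers: dict, requests_last_minute: int) -> bool:
    return bot_score(headers, requests_last_minute) >= ALARM_THRESHOLD
```

A scoring approach like this degrades gracefully: a single odd attribute does not block a legitimate visitor, while several in combination do.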

With bot traffic management of this kind, the fight against cybercriminals succeeds without affecting the desired traffic from good bots and, of course, from the web shop's customers.


Reference: t3n.de
