A data breach is among the most damaging risks an organization can face. Exposure of sensitive data, whether by accident or by criminals, can cost a company its competitive advantage, damage its reputation, and, when personal information is leaked, trigger regulatory fines.
To avoid data breaches, businesses invest in security measures that reduce their attack surface and minimize their vulnerability. But while most service providers address the gaps in the security perimeter, an often-overlooked threat remains: hardcoded secrets embedded in source code.
Why are secrets-in-code an issue?
Secret keys, private keys, SSH keys, access/secret keys, third-party secrets, API keys, and other sensitive data should never be hardcoded in your application's source code.
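As a minimal sketch of the alternative, a secret can be read from the environment at runtime instead of being embedded in the source. The variable name API_KEY below is an assumption for illustration; in practice the value would come from a vault or the deployment platform.

```python
import os

# Bad: this value ships with every copy of the source code.
HARDCODED_KEY = "sk_live_example_do_not_do_this"  # illustrative placeholder

def load_api_key() -> str:
    """Read the secret from the environment at runtime instead.

    The variable name API_KEY is an assumption for this sketch.
    """
    key = os.environ.get("API_KEY")
    if key is None:
        raise RuntimeError("API_KEY is not set; refusing to start")
    return key
```

The same pattern works with any secrets manager: the code asks for the secret by name at runtime, and the value itself never appears in the repository.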
Manufacturers and software development businesses often use the same hardcoded password across all apps (many of which require elevated rights to work) or across all devices in a particular series, release, or model. If an attacker learns the default password, they may be able to access every identical device or installation of the program. Exploits of this kind have fueled large-scale cyberattacks, causing enormous security breaches, global disruptions, and even threats to critical infrastructure.
An easy target for attackers
Secrets in source code are susceptible to password guessing attacks, allowing hackers and malware to take control of firmware, devices (such as health monitoring equipment), systems, and software.
Developers and other IT staff commonly embed passwords in code for quick access. These passwords are then often forgotten and remain in plain text in the codebase.
Occasionally, the code is then made public (on GitHub, for example), where the plain-text password is easily discoverable by anybody using publicly available scanning tools.
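Such scanners are simple in principle; production tools like truffleHog and gitleaks apply far larger rule sets plus entropy checks. A toy sketch of the idea, using a few illustrative regular expressions:

```python
import re

# A few illustrative patterns; real scanners use hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
]

def find_secrets(text: str) -> list:
    """Return every substring that matches one of the secret patterns."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Run against a repository's files, even this crude version will flag obvious hardcoded credentials before they are pushed.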
Affects third parties
Secrets in code threaten not only the device, firmware, or application that contains them but other parts of the IT system as well. Unsuspecting third parties can also be harmed by this negligence: they may be hit by DDoS attacks launched from botnets of devices enslaved through a hardcoded-credential breach.
Poses a threat to the automation process
Secrets are frequently hardcoded in the scripts and configuration files used by DevOps tools, putting the entire automation pipeline at risk.
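One way to keep secrets out of pipeline files is to commit only a template with placeholders and fill it from the CI environment at runtime. A minimal sketch, assuming the CI system injects a variable named REGISTRY_TOKEN (both names are assumptions for illustration):

```python
import os
from string import Template

# A pipeline config template with a placeholder instead of a literal
# secret; only this template is committed to the repository.
CONFIG_TEMPLATE = Template("registry_token = $REGISTRY_TOKEN\n")

def render_config() -> str:
    """Fill the template from the environment at pipeline runtime."""
    token = os.environ.get("REGISTRY_TOKEN")
    if not token:
        raise RuntimeError("REGISTRY_TOKEN not set in the CI environment")
    return CONFIG_TEMPLATE.substitute(REGISTRY_TOKEN=token)
```

The rendered file exists only inside the running job, so the secret never lands in version control.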
Two high-profile data breaches involving hardcoded credentials
The Mirai malware
This malware, which gained notoriety in late 2016, scans for the Telnet service on Linux-based IoT devices running BusyBox (such as DVRs and IP cameras) and on unattended Linux servers.
It logs in by brute force, working through a table of 61 known hardcoded default usernames and passwords. Mirai and its variants were used to build massive botnets of IoT devices, up to 400,000 connected devices at a time, most of them compromised without their owners' knowledge.
The Uber data hack
The Uber data hack exposed the personal information of 57 million customers and 600,000 drivers. Again, hardcoded credentials were to blame: an Uber employee had included unencrypted credentials in source code that was later shared on GitHub.
A hacker located those credentials on GitHub and used them to gain privileged access to Uber's Amazon AWS instances.
Credentials should never be hardcoded, because strings are easy to extract from an application's source code or binaries. The practice is no better than the usernames and passwords for a human-machine interface, workstation, or password-protected device that get taped to the equipment, scribbled on paper, or written on the component cabinet with an industrial-grade ink pen. Both are terrible habits, and both pave the way for a disruptive external or insider attack on systems and configuration files.