
A Log4j Retrospective Part 4: 5 Lessons Learned from Log4j

Written by

Charlie Gero

January 13, 2022

Charlie Gero is a VP & CTO of the Enterprise Division at Akamai and leads the Advanced Projects Group. He currently focuses on bleeding-edge research in the areas of security, applied mathematics, cryptography, and distributed algorithms in order to build the next generation of technologies that will protect Akamai's growing customer base. Through his research at Akamai, he has secured nearly 30 patents in cryptography, compression, performant network systems, real-time media distribution, and more, and has degrees in both Physics and Computer Science. He has been at Akamai for nearly 15 years, having previously founded a startup and served in key computer science positions in the pharmaceutical and networking industries.

In Part 4 of the Log4j retrospective series, I want to highlight the key takeaways. Many more lessons will be uncovered as the hunt to eradicate this vulnerability moves forward. However, there are already five fundamental takeaways.

1. The new norm

Both the complexity of software and the rate at which end users demand new features continue to grow rapidly and without bounds. To satisfy the needs of end users in the time frames required, developers must rely on a rapidly growing set of available libraries, language ecosystems, and third-party infrastructure and services. As a result, larger and larger portions of the functionality of any piece of software are composed of components the developers themselves may never have touched or fully understood.

In any software dependency graph, vulnerabilities are inherited from the leaf nodes (shared code and services) upward to the root node (the product being built). As more of these leaf nodes are added to a project (and, as noted above, they must be), the risk of inheriting a vulnerability grows.
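This inheritance can be made concrete with a small sketch: walk every dependency reachable from the root and collect the vulnerabilities declared on each node. The graph, package names, and CVE assignment below are illustrative, not real advisory data.

```python
# Minimal sketch: propagate known vulnerabilities from leaf dependencies
# up a dependency graph to the product that (transitively) includes them.

def inherited_vulns(graph, direct_vulns, root):
    """Return the set of vulnerability IDs the root inherits, by visiting
    every dependency reachable from it (depth-first, cycle-safe)."""
    seen, stack, found = set(), [root], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        found |= direct_vulns.get(node, set())
        stack.extend(graph.get(node, ()))
    return found

# Hypothetical app: the app's own code has no flaw, but a transitive
# leaf dependency does -- and the app inherits it anyway.
graph = {
    "my-app": ["web-framework", "json-lib"],
    "web-framework": ["logging-lib"],
    "json-lib": [],
    "logging-lib": [],
}
direct_vulns = {"logging-lib": {"CVE-2021-44228"}}

print(inherited_vulns(graph, direct_vulns, "my-app"))  # → {'CVE-2021-44228'}
```

Note that "my-app" never references "logging-lib" directly; the exposure arrives purely through the dependency chain, which is exactly how Log4j reached so many products.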

This all leads to an inevitable conclusion: These types of vulnerabilities are not only here to stay, but will continue to expand in frequency and impact.

This is the new norm.

2. Risk is recursive

We often think of risk too narrowly, considering only the systems, software, and functions we can directly control. More advanced organizations are beginning to assess risk one level out; for example, by asking their developers to examine the trustworthiness of a given library.

But, as more and more systems and software continue to be composed upon layers and layers of third-party code, organizations will increasingly have to not only assess the risk of a given library or partner, but also the practices of that development community or vendor, to ensure they are examining their dependencies as well.

Every node in the dependency tree and supply chain should be assessed by you, your partners, and/or the respective development community to determine if tolerable risk levels are met.

3. Visibility unlocks speed

Even with the above risk assessments in place, vulnerabilities are going to occur. We must accept this fact. The question is how we can more effectively address the situation when it happens, not how we can prevent it altogether.

To that end, visibility is paramount. Many organizations struggle with patching because they don’t know what machines are affected in the first place. Enterprises must have systems in place that provide visibility into what is running in the data center and cloud.

The more comprehensive and accurate the visibility is, the faster an organization can react and patch necessary assets.
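One small slice of that visibility can be sketched in a few lines: inventory the log4j-core JARs present on a host by filename and flag any version below a patched threshold. This is deliberately simplified; real inventory systems also inspect running processes and nested ("shaded") JARs, and the 2.17.0 threshold here reflects the fixes available in December 2021.

```python
# Minimal sketch: flag log4j-core JARs older than a patched version.
import re

PATCHED = (2, 17, 0)  # illustrative threshold for this sketch

def parse_version(filename):
    """Extract (major, minor, patch) from a name like log4j-core-2.14.1.jar."""
    m = re.match(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", filename)
    return tuple(map(int, m.groups())) if m else None

def flag_vulnerable(filenames):
    """Return the filenames whose log4j-core version predates the patch."""
    return [f for f in filenames
            if (v := parse_version(f)) is not None and v < PATCHED]

inventory = ["log4j-core-2.14.1.jar", "log4j-core-2.17.1.jar", "app.jar"]
print(flag_vulnerable(inventory))  # → ['log4j-core-2.14.1.jar']
```

The point is not the scanner itself but the prerequisite: you can only run a check like this if you already know which hosts and file systems to point it at.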

4. Filter out the obvious

Many vulnerabilities can only be attacked through a chain of exploits. Cutting off any link in the chain is often enough to prevent full exploitation. As a result, systems that filter out known and obvious attacks are critical.

Organizations should prioritize the following systems:

  • Endpoint protection platforms (EPP)
Protect endpoints from known malicious software

  • Web application firewalls (WAF)
Protect web applications from known malicious payloads and threat actors — consider Akamai’s best-in-class Kona protection

  • DNS firewall
Protect endpoints from visiting malicious domains and filter out malicious DNS payloads — consider Akamai Enterprise Threat Protection solution

  • Secure web gateway (SWG)
Protect endpoints from downloading malware and visiting malicious sites on the internet — consider Akamai Enterprise Threat Protection solution

  • Multi-factor authentication (MFA)
Reduce the risk of stolen credentials allowing access into your enterprise, where an exploit chain can be delivered — consider Akamai MFA

  • Identity-based segmentation
Restrict software and systems to communicating only with the machines necessary to complete their tasks — consider Akamai Guardicore Segmentation

  • Zero Trust Network Access (ZTNA)
Limit the impact of infected end users coming into the network — consider Akamai Enterprise Application Access

5. Least privilege reigns supreme

Finally, organizations should fully embrace the principle of least privilege. Lock down servers, machines, and software so that they may reach only the systems required to perform their tasks.

For example, many of the systems that made outbound LDAP calls as part of the Log4j exploit never had a legitimate need to use LDAP. Such systems should have had outbound LDAP access blocked by a firewall. Another example: If a service only answers inbound requests, it should be blocked from initiating outbound connections entirely.
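A default-deny egress policy captures both examples: each service declares the destinations it legitimately needs, and everything else, including the outbound LDAP (port 389) callbacks the Log4j exploit relies on, is denied. The service names, hosts, and ports below are illustrative.

```python
# Minimal sketch of a default-deny egress policy per service.

ALLOWED_EGRESS = {
    "web-frontend": {("payments.internal", 443), ("db.internal", 5432)},
    # A pure request/response service declares no outbound needs at all.
    "report-worker": set(),
}

def egress_allowed(service, dest_host, dest_port):
    """Default-deny: permit only destinations the service declared."""
    return (dest_host, dest_port) in ALLOWED_EGRESS.get(service, set())

# The exploit's outbound LDAP callback is denied for every service:
print(egress_allowed("web-frontend", "attacker.example", 389))  # → False
print(egress_allowed("web-frontend", "db.internal", 5432))      # → True
```

In practice this policy would be enforced by a firewall or segmentation layer rather than application code, but the shape is the same: an explicit allowlist per workload, with deny as the default.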

By applying the principles of least privilege to all systems and software in your control, you can greatly reduce the threat surface when a vulnerability arises, and in many cases, stop the attack chain before you are impacted.

Learn more

Thanks for making it to the end of this series with me. Although this blog series ends here, our research and protection of customers from vulnerabilities continues. Don’t hesitate to reach out to your Akamai contact if you’d like to learn more about our recommendations for mitigation from Log4j and other threats.
