
How Georgia-Pacific used generative AI and AWS Countdown Premium to improve manufacturing operations KPIs

15-minute read
Content level: Advanced

Learn how Georgia-Pacific partnered with AWS to leverage generative AI (gen AI) and enhance operational performance and productivity across its manufacturing network.

Introduction

Gen AI offers the potential for analyzing structured and unstructured data in manufacturing operations. Recognizing this potential, Georgia-Pacific and AWS partnered to explore how they might implement gen AI solutions in Georgia-Pacific's plants. The journey presented challenges, including data integration complexities and concerns about AI reliability in critical operations. The teams collaborated closely, leveraging the architectural framework in AWS to address these issues. They focused on data security, scalability, and real-time capabilities while working through the intricacies of adapting AI to manufacturing contexts. This ongoing process involves careful testing and iteration to balance the promise of AI with the practical realities of industrial applications. Both companies acknowledge that while progress is being made, fully realizing the potential of gen AI in manufacturing requires continued effort and refinement.

At the heart of this transformation is the AWS Countdown Premium support offering. AWS Countdown Premium is a comprehensive solution that's designed to enhance operational efficiency and boost productivity. With this solution, Georgia-Pacific scaled its gen AI capabilities across its manufacturing network, and achieved significant improvements in operational performance and overall productivity. In the following sections, we explore how this partnership used AWS Countdown Premium support to revolutionize Georgia-Pacific's manufacturing processes.

About Georgia-Pacific

Georgia-Pacific is an American pulp and paper company based in Atlanta, Georgia. It's one of the world's largest manufacturers and distributors of packaging and paper goods, such as tissue and paper towels, that support daily routines. Georgia-Pacific is also a leading supplier of building products for many do-it-yourself warehouse retailers.

Georgia-Pacific has more than 30,000 employees globally, primarily in North America (the United States and Canada). The company has approximately 150 factories and mills in different locations, and invests approximately 90% of earnings back into the business.

About AWS Countdown Premium

AWS Countdown Premium is an engineering-led offering that supports customers throughout their event lifecycle:

  • Initial architecture

  • Planning

  • Building proof-of-concept and testing

  • Production

AWS Countdown Premium accelerates migration and modernization processes, improving the return on infrastructure investments and facilitating successful go-live events. During the design and implementation of gen AI solutions, organizations often face complex technical challenges. To address these challenges, AWS Countdown Premium provides access to specialist engineers who offer context-specific guidance and troubleshooting expertise. This service also assists in critical tooling decisions and provides prescriptive guidance for developing essential artifacts, runbooks, and decision templates. AWS Countdown Premium also focuses on key areas that contribute to scalability, including performance efficiency, security measures, operational excellence, and system reliability. With this comprehensive support, organizations can navigate the intricacies of AI implementation more effectively to create a smoother transition and optimized outcomes.

The challenges of managing gen AI deployments at scale


As organizations move beyond proof-of-concept to full-scale deployment of gen AI solutions, they face several significant challenges that aren't apparent during the initial phase.

Integration with existing infrastructure

One of the primary hurdles is the ability to integrate a solution with an existing infrastructure. While prototypes can effectively function in isolated sandboxes, production deployments require careful consideration of data integration, system compatibility, and network architecture.

Monitoring and observability

The ability to monitor and observe gen AI solutions presents another substantial challenge, especially when managing a distributed AWS infrastructure across multiple AWS accounts and AWS Regions. Organizations must develop sophisticated methods to track resources throughout their distributed environment. They must also implement centralized alarms and dashboards that provide comprehensive visibility through a single interface.

Security

Security considerations become exponentially more complex at scale. Teams must continuously identify and address potential vulnerabilities and threat vectors across the entire infrastructure. Maintaining robust security is an ongoing process, so security operations teams must remain constantly vigilant.

Performance management

Performance management also presents unique challenges in production environments. The limitations of proof-of-concept testing become apparent when applications face real-world traffic and workloads. Unlike controlled testing environments, production deployments must handle actual stress conditions.

Cost management

Many organizations struggle with cost management because an inadequate FinOps strategy leads to unexpected expenses. Common issues include underutilized resources across compute, storage, and database services, inappropriate hardware selections, and overprovisioning of resources. Without a robust cost optimization strategy, organizations often incur significant overhead that proper planning and resource management could avoid.

The Georgia-Pacific and AWS partnership

Several pressing challenges that the manufacturing industry faces led to Georgia-Pacific's decision to partner with AWS for gen AI services. At the forefront is a growing labor crisis. Companies are experiencing a significant workforce transition as experienced personnel approach retirement age. At the same time, companies are having difficulty attracting younger generations to work in their remote factory and mill locations. This demographic shift has created an urgent need to preserve and transfer decades of invaluable institutional knowledge.

Safety considerations also played a crucial role in this partnership decision. The inspection of massive paper machines, roughly the size of football fields, poses significant risks to workers. By implementing AWS gen AI solutions, Georgia-Pacific aims to automate these hazardous inspection tasks and enhance workplace safety while maintaining operational efficiency.

The company also recognized the potential of gen AI to address critical operational challenges. Through the advanced capabilities of AWS, Georgia-Pacific seeks to improve its Overall Equipment Effectiveness (OEE) and reduce unplanned downtime, goals that affect the bottom line. Most importantly, the partnership created sophisticated knowledge transfer systems that can capture and share the expertise of retiring workers with newer employees. These systems help bridge the growing experience gap in the workforce. This strategic collaboration with AWS doesn't just help Georgia-Pacific solve immediate operational challenges, but also positions the company for a future where AI-driven solutions play a crucial role in manufacturing excellence.

Considerations for scaling gen AI applications

Scaling gen AI introduces unique complexities beyond traditional machine learning (ML). While built on statistical ML and deep learning foundations, gen AI represents a paradigm shift that requires new expertise. In the partnership between the two organizations, the team determined that there were four areas that needed additional consideration.

Data preparation

Data preparation is the foundational step for ML and gen AI applications, and encompasses several crucial components. Initially, teams use Amazon Simple Storage Service (Amazon S3) to create a unified data management system: a data lake that centralizes both structured and unstructured data. Then, teams use services such as Amazon SageMaker Data Wrangler to address data quality and context. These services implement robust data quality standards and source contextual data to enhance large language model (LLM) performance.

The next phase involves ground truth data labeling, where Amazon services automate the process to maintain consistency and minimize human bias. Then, data is validated before the services scale the output evaluation. Lastly, Retrieval Augmented Generation (RAG) implementation enriches the LLM context and helps identify appropriate embeddings for specific use cases. Selecting the right vector database is crucial. You must consider several factors that are essential for gen AI workloads and dataset preparation, such as pricing and security features, including role-based access control (RBAC).
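As an illustration of the retrieval step in RAG, the sketch below ranks document chunks against a question and returns the best matches to prepend to the LLM prompt. This is a minimal stand-in, not Georgia-Pacific's implementation: word overlap (Jaccard similarity) substitutes for real embeddings, which a production system would obtain from an embedding model and store in a vector database.

```python
# Minimal RAG retrieval sketch. Word overlap stands in for embedding
# similarity; a production system would embed chunks and queries with an
# embedding model and query a vector database instead.
def similarity(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    return sorted(chunks, key=lambda c: similarity(question, c), reverse=True)[:k]

chunks = [
    "Replace the felt when moisture readings exceed the limit.",
    "Lubricate the dryer bearings every 500 hours.",
    "Safety interlocks must be tested before startup.",
]
context = retrieve("When should the dryer bearings be lubricated?", chunks)
# The retrieved chunks are then prepended to the prompt as context.
```

The chunk texts here are invented examples; only the retrieve-then-augment pattern carries over to a real deployment.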

Foundational models, LLMs, and gen AI operations

The operational framework for foundational models, LLMs, and gen AI operations requires coordinated management of multiple components for reliable, scalable implementation. The process starts with continuous deployment through ML pipelines, which facilitates consistent model availability across AWS infrastructure. Comprehensive monitoring systems track model and feature drift, and trigger automated ML operations (MLOps) retraining pipelines, built with services such as Amazon SageMaker, when necessary.
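The drift-triggered retraining idea can be sketched as follows. This is hypothetical code, not the team's pipeline: it compares a recent feature window against the training baseline and flags retraining when the mean shifts beyond a threshold. Real MLOps tooling (for example, Amazon SageMaker Model Monitor) uses richer statistics, but the control flow is similar.

```python
import statistics

# Minimal drift-check sketch: flag retraining when the recent mean of a
# feature drifts more than a few baseline standard deviations away.
def needs_retraining(baseline: list[float], recent: list[float],
                     threshold_sigmas: float = 3.0) -> bool:
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1.0  # guard against zero spread
    return abs(statistics.mean(recent) - mu) > threshold_sigmas * sigma
```

A real pipeline would run this kind of check on a schedule and, on a positive result, kick off the MLOps retraining workflow rather than retraining inline.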

Prompt engineering and management are crucial, and organizations must maintain systematic approaches to prompt development and storage. These approaches include tools such as Amazon Bedrock Prompt Management that create structured prompt libraries, with continuous evolution based on feedback and performance metrics.
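A structured prompt library can be as simple as versioned templates keyed by name, as in this hypothetical sketch. The prompt names, versions, and fields are illustrative only; Amazon Bedrock Prompt Management provides this capability as a managed service.

```python
from string import Template

# Hypothetical prompt library: prompts are versioned templates rather
# than hardcoded strings, so they can be reviewed, updated, and rolled
# back based on feedback and performance metrics.
PROMPT_LIBRARY = {
    ("operator-assistant", 2): Template(
        "You are an assistant for paper-machine operators.\n"
        "Use only the context below to answer.\n"
        "Context: $context\nQuestion: $question"
    ),
}

def render(name: str, version: int, **params: str) -> str:
    """Render a named, versioned prompt with the given parameters."""
    return PROMPT_LIBRARY[(name, version)].substitute(**params)

prompt = render(
    "operator-assistant", 2,
    context="Lubricate dryer bearings every 500 hours.",
    question="How often are the bearings lubricated?",
)
```

Keying on (name, version) makes rollbacks a one-line change and lets metrics be attributed to a specific prompt revision.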

Implementing robust monitoring and observability systems provides real-time performance insights that track API throttling, response latencies, and other important metrics. Early implementation of these systems, combined with automated alerts, facilitates rapid response to issues. These components must work together seamlessly to maintain reliability and consistency across implementations and effectively scale gen AI systems.
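The kind of latency alerting described above can be sketched with a toy in-process monitor. This is illustrative only; a production system would publish these metrics to Amazon CloudWatch and alert from there. The window size and threshold are hypothetical values.

```python
from collections import deque

# Toy latency monitor: keep a rolling window of response times and
# signal when the p95 latency crosses a threshold.
class LatencyMonitor:
    def __init__(self, window: int = 100, p95_threshold_ms: float = 2000.0):
        self.samples = deque(maxlen=window)
        self.threshold = p95_threshold_ms

    def record(self, latency_ms: float) -> bool:
        """Record a sample; return True if the p95 breaches the threshold."""
        self.samples.append(latency_ms)
        ordered = sorted(self.samples)
        p95 = ordered[int(0.95 * (len(ordered) - 1))]
        return p95 > self.threshold

monitor = LatencyMonitor()
for _ in range(10):
    healthy_alert = monitor.record(100.0)   # fast responses: no alert
for _ in range(10):
    degraded_alert = monitor.record(5000.0) # sustained slowdown: alert
```

Using a percentile rather than a mean keeps one slow outlier from firing the alert while still catching sustained degradation.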

Security and governance

Because of the unique threat vectors of gen AI applications, security and governance are essential components in your solution. A comprehensive security framework begins with established guardrails and robust identity and access management policies. These tools protect against training data poisoning and prompt injection. To help prevent harmful prompt injections, you must explicitly include security prompts in LLM instructions. Protection against model DDoS attacks is crucial, as bad actors can overwhelm public-facing LLMs with excessive input data or high-volume querying. These precautions require thorough input data validation, sanitization, and zero-trust data access policies. You can use services such as Amazon Bedrock Guardrails to enforce data retention policies and manage your model governance. Lastly, output validation is critical before you send LLM responses to downstream applications. Malicious instructions can potentially compromise systems or expose sensitive data, such as Personally Identifiable Information (PII). These security measures must work together to create a robust defense against various threat vectors, while maintaining system integrity and data protection.
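As a simplified illustration of the input validation and sanitization step, the sketch below rejects inputs that match common injection phrasings and redacts an obvious PII pattern before the text is logged or forwarded. The patterns are hypothetical examples, and this is not a replacement for a managed service such as Amazon Bedrock Guardrails.

```python
import re

# Illustrative input validation: block likely prompt-injection attempts
# and redact an example PII pattern (US SSN format) before logging.
INJECTION_PATTERNS = [
    r"ignore (all|previous|the above) instructions",
    r"reveal your system prompt",
]
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def validate(user_input: str) -> str:
    """Raise on suspected injection; otherwise return redacted input."""
    lowered = user_input.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("possible prompt injection")
    return SSN.sub("[REDACTED]", user_input)
```

A real deployment would pair checks like these with output validation on the LLM response, as the paragraph above notes.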

Cost efficiency and optimization

Cost efficiency and optimization are crucial aspects of managing gen AI operations, and focus primarily on model management and token size control. For model management, services such as AWS Budgets provide essential financial controls to help prevent overspending. To fully understand both implementation and inference costs, it's crucial to work with AWS Support teams before you scale operations. To manage costs, organizations can experiment with smaller models because these models use reduced token sizes. Token size management requires effective prompt engineering to create high-quality, efficient prompts that minimize token usage. Well-crafted prompts can effectively manage tokens per second (TPS) and directly influence overall operational costs. This dual approach of careful model selection and efficient prompt engineering creates a balanced strategy for implementing cost-effective gen AI operations while maintaining optimal performance.
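A back-of-the-envelope sketch of how token counts drive cost follows. The per-1K-token prices are hypothetical placeholders, not actual Amazon Bedrock pricing, and the 4-characters-per-token heuristic is a rough approximation for English text.

```python
# Hypothetical per-1K-token prices -- check current model pricing.
PRICE_PER_1K_INPUT = 0.003   # USD, placeholder
PRICE_PER_1K_OUTPUT = 0.015  # USD, placeholder

def estimate_tokens(text: str) -> int:
    """Rough heuristic: about 4 characters per token for English text."""
    return max(1, len(text) // 4)

def request_cost(prompt: str, expected_output_tokens: int) -> float:
    """Estimated cost of one request, input plus output tokens."""
    input_tokens = estimate_tokens(prompt)
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
        + (expected_output_tokens / 1000) * PRICE_PER_1K_OUTPUT
```

Because output tokens are typically priced several times higher than input tokens, trimming verbose responses often saves more than trimming prompts, which is why prompt engineering for concise answers matters to the bottom line.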

Georgia-Pacific's gen AI solution

Georgia-Pacific partnered with AWS to successfully deploy a gen AI-powered Operator Assistant chatbot, from the proof-of-concept stage to production. This solution consisted of four phases:

  1. Experimenting with a solution.

  2. Improving data management, generative AI operations (AIOps), and resiliency of the solution.

  3. Addressing the gap in data.

  4. Optimizing the solution.

Experimenting with a solution

In phase I, Georgia-Pacific's initial solution had a few issues:

  • The organization hardcoded embedding prompts and logic directly in application code, without parameterization.

  • They initially used a non-standard database, which can lead to scaling problems.

  • They didn't have robust support from operational teams with the correct skill sets.

In this phase, Georgia-Pacific created a new Amazon Kendra index for each use case, which wasn't an efficient use of indexes. The organization also implemented custom pipelines with little CI/CD or source version control, and didn't include any failover.


In this phase, every surge in demand required re-engineering work, and the team even had to change the UI. The UI for this phase was built with Python and Streamlit, which weren't part of the team's standard skill set.

Improving data management, generative AIOps, and resiliency

Georgia-Pacific quickly moved to the next phase to address the issues that they found during the initial experiment. The focus of the organization was to improve their data management, generative AIOps, and resiliency. To make these changes quickly, Georgia-Pacific continuously evaluated, observed, monitored, and reviewed their findings with their teams and AWS.

The first change that the team implemented was parameterization of code and migration to a standard PostgreSQL database. Georgia-Pacific realized that they needed a standard database to store the intent, context, parameters, and prompts. They adopted CI/CD and DevOps practices, and then added containerization, failover, and security and governance policies.
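The parameterization step can be sketched like this, with Python's built-in sqlite3 standing in for the PostgreSQL database the team adopted. The schema, intent name, and template text are hypothetical illustrations, not Georgia-Pacific's actual data model.

```python
import sqlite3

# Sketch of parameterization: prompts and their parameters live in a
# database table instead of being hardcoded in application code.
# sqlite3 stands in for PostgreSQL here; the schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE prompts (
        intent   TEXT PRIMARY KEY,
        context  TEXT,
        template TEXT
    )
""")
conn.execute(
    "INSERT INTO prompts VALUES (?, ?, ?)",
    ("diagnose-downtime", "paper-machine",
     "Given the context {context}, explain the likely cause of: {event}"),
)

# At request time, look up the template by intent and fill in parameters.
row = conn.execute(
    "SELECT template FROM prompts WHERE intent = ?", ("diagnose-downtime",)
).fetchone()
prompt = row[0].format(context="paper-machine", event="web break on dryer 3")
```

Moving templates into a table means a prompt change is a data update, not a code deployment, which is what makes rapid iteration possible.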

During this phase, Georgia-Pacific discovered that to train their solution, they needed to find the right data. For the solution to work at the level that the team needed, they knew that the solution would be only as good as the data used to train it.

Addressing the gap in data

Because Georgia-Pacific is a 100-year-old company with machines and facilities that have changed over time, it was a challenge to find the right data. To get valid and current data, the team had to directly work with operators and maintenance engineers.

To solve this data issue, AWS used Amazon services to create a tool called DocGen. DocGen generates documents that translate the human experience into consumable documents, which the LLMs could then ingest. Georgia-Pacific used a chat interface and a voice interface to interview these operators and engineers. These knowledgeable subject matter experts (SMEs) were asked questions in areas where Georgia-Pacific didn't have enough information or the right documentation. Then, DocGen prepared the standard documentation and fed the documentation to the Operator Assistant LLMs.
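A hypothetical sketch of the transformation DocGen performs: turning SME interview answers into a standard document that the Operator Assistant LLMs can ingest. The real DocGen tool is built on Amazon services; the function name and output format below are illustrative only.

```python
# Illustrative sketch: convert question/answer pairs from an SME
# interview into a standardized document for LLM ingestion.
def transcript_to_doc(machine: str, qa_pairs: list[tuple[str, str]]) -> str:
    lines = [f"# Operating knowledge: {machine}", ""]
    for question, answer in qa_pairs:
        lines.append(f"## {question}")
        lines.append(answer)
        lines.append("")
    return "\n".join(lines)

doc = transcript_to_doc(
    "Dryer section",
    [("How often are the bearings lubricated?", "Every 500 hours.")],
)
```

The resulting documents would then be indexed like any other source material, closing the gap where no written documentation existed.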


Optimizing the solution


Georgia-Pacific is currently optimizing the solution for cost, support, and observability. The team stored parameters in standardized databases. In this phase, Amazon Kendra indexes are used effectively and optimized for cost. The team implemented a custom in-house Python library for code reuse in generative AIOps and MLOps, which improves observability. AWS Countdown Premium helped Georgia-Pacific's chatbot move from the experimentation and Minimum Viable Product (MVP) stages to a more reliable, scalable stage.

How AWS Countdown Premium helped

AWS Countdown Premium engaged the right engineers, industry experts, and technical experts from day 1 of the customer journey. These experts dove deep to develop proactive technical guidance and engaged with Georgia-Pacific for architecture reviews. After the initial architecture was built, the AWS Countdown Premium team expanded its involvement and provided an in-house skill set that included training and enablement. AWS Countdown Premium mitigated risks and load-tested the solution to check for critical issues and bottlenecks. Lastly, the team used internal experts and services to help with the go-live event and provided continuous monitoring to identify issues. AWS Countdown Premium also delivered prompt, context-aware support by continuously setting up the right tools.

Results

In just 8 months, Georgia-Pacific achieved significant improvements across multiple operational dimensions. They successfully deployed 20 use cases across 6 different sites, and showcased impressive speed and efficiency in their rollout strategy. The effect on productivity has been particularly noteworthy, with 500 users actively adopting the new technologies and weekly usage showing a consistent 10% monthly growth rate.

One of the most tangible benefits has been the substantial reduction in waste across operations. The implementation has also accelerated the model evaluation process, allowing for quicker iterations and improvements. Through the implementation of Amazon Bedrock Guardrails, Georgia-Pacific enhanced its monitoring and observability capabilities, providing better insights into system performance and usage patterns. Furthermore, the company achieved significant cost optimizations, particularly in the areas of vector databases and agent design. This optimization demonstrates that efficiency improvements don't need to come at the expense of increased operational costs.

This comprehensive transformation showcases how strategic implementation of gen AI can deliver measurable improvements across multiple facets of industrial operations.


Georgia-Pacific's implementation of gen AI solutions with AWS yielded valuable lessons for organizations that are pursuing digital transformation. Their experience demonstrates that quality data collection is crucial for successful gen AI adoption, while a long-term architectural vision is essential for managing rapid scaling effectively.

The company's success stems from their focus on concrete business objectives rather than technological hype, facilitating meaningful returns on investment. Early customer engagement and emphasis on long-term supportability were key factors in their success. Their strong partnership with AWS Cloud support teams proved vital in overcoming technical challenges and optimizing their gen AI solutions. These insights now serve as a blueprint for other organizations that are embarking on similar digital transformation journeys.

About the authors


Manish Sinha

Manish is the IT Vice President of Advanced Analytics at Georgia-Pacific focused on AI architecture and delivery. He has 32 years of experience in building analytics products, advanced analytics, and data engineering.


Neel Sendas

Neel Sendas is a Principal Technical Account Manager at AWS. Neel works with enterprise customers to design, deploy, and scale cloud applications to achieve their business goals. He is also an ML enthusiast, and has worked on various ML use cases for manufacturing and logistics industries.


Amit Alampally

Amit Alampally is a Principal Product Manager at AWS. Amit focuses on building software products to accelerate customer migrations and modernization.