KubeCon + CloudNativeCon India 2024 | In-person | 11-12 December

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for KubeCon + CloudNativeCon India 2024 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

Please note: This schedule is automatically displayed in India Standard Time (UTC+5:30). To see the schedule in your preferred timezone, please select it from the drop-down menu to the right, above "Filter by Date." The schedule is subject to change and session seating is available on a first-come, first-served basis.
Room 1
Wednesday, December 11
 

9:30am IST

Keynotes to be Announced
Wednesday December 11, 2024 9:30am - 11:00am IST
Room 1

11:00am IST

Coffee Break ☕
Wednesday December 11, 2024 11:00am - 11:30am IST
Room 1

11:30am IST

Welcome | CNCF Project Lightning Talks - Jorge Castro, CNCF
Wednesday December 11, 2024 11:30am - 11:33am IST
Room 1

11:35am IST

LitmusChaos Evolution: The Latest Innovations and Security Enhancements To Chaos Engineering | Project Lightning Talk
Wednesday December 11, 2024 11:35am - 11:40am IST
A lot has transformed in the chaos engineering ecosystem with the large-scale adoption the practice has witnessed in recent years. As the LitmusChaos project matured with a robust UI and varied use cases, the rise in community growth is a story in itself. This lightning talk will showcase the project's latest updates, including new features that enhance the user experience and make the platform more secure. Another highlight is the results of a recent third-party security audit, sponsored by OSTIF and conducted by 7A Security, which helped strengthen the platform's security posture.
Additionally, I'll discuss LitmusChaos' active participation in LFX mentorship programs and a mentorship program in South Korea, fostering contributions from mentees and growing the community.
Room 1

11:42am IST

Pick Cilium! Lessons Learned From Writing 20+ Cloud Native Case Studies about Cilium | Project Lightning Talk
Wednesday December 11, 2024 11:42am - 11:47am IST
Bill has interviewed over 20 companies in industries ranging from media to financial services about why they picked Cilium for their cloud native platform. In this talk, he will reveal what end users truly want when adopting cloud native technologies and what the forcing function was for each of them to choose Cilium.

You’ll hear firsthand accounts of the triumphs and tribulations faced by companies like Bloomberg, DigitalOcean, The New York Times, and more, as well as the specific benefits these organizations are reaping, from enhanced security and observability to improved performance and cost savings.

By the end, the audience will understand the real-world applications and advantages of Cilium and why end users chose it.
Room 1

11:49am IST

Karmada: Project Introduction and Updates | Project Lightning Talk
Wednesday December 11, 2024 11:49am - 11:54am IST
Karmada, a CNCF incubating project, aims to offer a unified control plane for seamless deployment and management across diverse cloud environments.

In this lightning talk, the following topics will be covered:

- Brief introduction to Karmada
- Core Capabilities
- Key Use Cases
- Community updates
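
To make the core capabilities concrete, below is a minimal sketch (not material from the talk) that propagates an existing Deployment named "nginx" to two member clusters with a Karmada PropagationPolicy, applied through the standard Kubernetes Python client. The cluster names, namespace, and Deployment are illustrative assumptions.

```python
# Minimal sketch (not from the talk): propagate an existing Deployment named
# "nginx" to two member clusters using a Karmada PropagationPolicy.
# Cluster names and namespace are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig must point at the Karmada API server

propagation_policy = {
    "apiVersion": "policy.karmada.io/v1alpha1",
    "kind": "PropagationPolicy",
    "metadata": {"name": "nginx-propagation", "namespace": "default"},
    "spec": {
        "resourceSelectors": [
            {"apiVersion": "apps/v1", "kind": "Deployment", "name": "nginx"}
        ],
        "placement": {
            "clusterAffinity": {"clusterNames": ["member1", "member2"]}
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="policy.karmada.io",
    version="v1alpha1",
    namespace="default",
    plural="propagationpolicies",
    body=propagation_policy,
)
```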
Room 1

11:56am IST

Introduction to Kyverno, the Cloud Native Policy Engine | Project Lightning Talk
Wednesday December 11, 2024 11:56am - 12:01pm IST
Kyverno is often described as a "Swiss Army knife" due to its many capabilities. In this quick tour, learn about those capabilities and how you can use Kyverno to improve security and compliance and streamline operations across your clusters.
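
As a hedged illustration of what a Kyverno policy looks like (not an example from the talk), the sketch below creates a ClusterPolicy that audits Pods missing an app.kubernetes.io/name label, using the Kubernetes Python client; the policy name and label are assumptions.

```python
# Illustrative sketch only: a Kyverno ClusterPolicy that audits Pods missing
# an "app.kubernetes.io/name" label, created via the Kubernetes API.
from kubernetes import client, config

config.load_kube_config()

require_label_policy = {
    "apiVersion": "kyverno.io/v1",
    "kind": "ClusterPolicy",
    "metadata": {"name": "require-app-name-label"},
    "spec": {
        "validationFailureAction": "Audit",  # switch to "Enforce" to block admission
        "rules": [
            {
                "name": "check-app-name-label",
                "match": {"any": [{"resources": {"kinds": ["Pod"]}}]},
                "validate": {
                    "message": "The label app.kubernetes.io/name is required.",
                    "pattern": {
                        "metadata": {"labels": {"app.kubernetes.io/name": "?*"}}
                    },
                },
            }
        ],
    },
}

client.CustomObjectsApi().create_cluster_custom_object(
    group="kyverno.io",
    version="v1",
    plural="clusterpolicies",
    body=require_label_policy,
)
```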
Room 1

12:03pm IST

Contributing to Submariner - How to Get Started | Project Lightning Talk
Wednesday December 11, 2024 12:03pm - 12:08pm IST
A short talk on how developers can contribute to Submariner, the areas that need contributions, and the help available to new contributors.
Room 1

12:10pm IST

Wasm-Powered Open Source LLMs and AI Agents | Project Lightning Talk
Wednesday December 11, 2024 12:10pm - 12:15pm IST
As AI and machine learning continue to shape modern enterprises, the need for fast, scalable, and secure deployment of AI models across servers and edge devices with different underlying infrastructure is critical. In this lightning talk, I will introduce how WasmEdge, a lightweight WebAssembly runtime, is powering open-source LLMs (Large Language Models) and AI agents for real-time, decentralized inference with portability and efficiency. WasmEdge enables developers to deploy AI models across diverse hardware environments with minimal overhead.

We’ll explore how WasmEdge’s newest features, including function calling, text to speech, text to image, and video recognition, along with its cross-platform compatibility and secure sandboxing, make it ideal for running LLMs and AI agents in industry. You can also run AI agents (like Cursor) alongside your self-hosted open-source LLMs.
Room 1

12:17pm IST

What's happening with CAPIBM | SIG Lightning Talk
Wednesday December 11, 2024 12:17pm - 12:22pm IST
This presentation will start with an introduction to CAPIBM and proceed to discuss CAPIBM releases and new features. We will also cover major milestones, the roadmap, and other critical information attendees should know about the project. Join the session to learn how to get involved!
Room 1

12:24pm IST

Intro to the CNCF App Development Working Group | WG Lightning Talk
Wednesday December 11, 2024 12:24pm - 12:29pm IST
We are excited to introduce the newly formed Application Development Working Group (WG) under the Technical Advisory Group (TAG) for App Delivery within the Cloud Native Computing Foundation (CNCF). Co-chaired by Mauricio Salatino, Daniel Oh, and Thomas Vitale, this WG aims to foster the growth of developers in the cloud native space. The Application Development WG was established to address the increasing need for specialized focus on cloud native application development. Our mission is to engage and support developers transitioning to or enhancing their practices within the cloud native ecosystem. This initiative is integral to the CNCF’s broader strategy of promoting cloud native technologies and fostering a vibrant, collaborative community.
Room 1

12:31pm IST

Taking a Quick Look at New Features in Argo CD v2.12 | Project Lightning Talk
Wednesday December 11, 2024 12:31pm - 12:36pm IST
In complex CI/CD environments, managing and previewing applications efficiently can be a challenge, especially when dealing with large repositories, multi-source applications, or performance bottlenecks. Argo CD version 2.12 addresses these issues by introducing several key features and improvements.

This lightning talk will cover the latest additions in Argo CD 2.12, including new commands for easier application set previews, fixes for mono-repo sync issues, and performance enhancements for large applications. If you're looking for a quick update on the new features and bug fixes in this Argo CD release, you should definitely attend this talk.
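
For context only, and not tied to the 2.12 specifics above, here is a minimal sketch of an ApplicationSet with a list generator, the kind of object the new preview tooling operates on, created via the Kubernetes Python client; the repository URL, cluster entries, and names are placeholder assumptions.

```python
# For context only: a minimal ApplicationSet with a list generator.
# Repo URL, cluster entries, and names are placeholders.
from kubernetes import client, config

config.load_kube_config()

application_set = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "ApplicationSet",
    "metadata": {"name": "guestbook", "namespace": "argocd"},
    "spec": {
        "generators": [
            {
                "list": {
                    "elements": [
                        {"cluster": "in-cluster", "url": "https://kubernetes.default.svc"}
                    ]
                }
            }
        ],
        "template": {
            "metadata": {"name": "guestbook-{{cluster}}"},
            "spec": {
                "project": "default",
                "source": {
                    "repoURL": "https://github.com/example/guestbook.git",  # placeholder
                    "targetRevision": "HEAD",
                    "path": "manifests",
                },
                "destination": {"server": "{{url}}", "namespace": "guestbook"},
            },
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="argoproj.io",
    version="v1alpha1",
    namespace="argocd",
    plural="applicationsets",
    body=application_set,
)
```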
Room 1

12:38pm IST

Buildpacks: Quietly Redefining the Build Experience | Project Lightning Talk
Wednesday December 11, 2024 12:38pm - 12:43pm IST
Cloud Native Buildpacks have emerged as an elegant way to create containers in the cloud native space. They are useful to platform operators designing automation for several verticals that aim to utilize cloud native tech.
In this talk, I intend to present the most recent areas of focus for the Buildpacks community. These include ARM64 migration, a focus on security primitives such as SBOMs, and other optimisations to Buildpacks workflows.
Room 1

12:45pm IST

The gRPC Well Known Protos | Project Lightning Talk
Wednesday December 11, 2024 12:45pm - 12:50pm IST
gRPC has found widespread adoption in organizations around the world. You've probably written a protobuf yourself to define your own API. But did you know that the gRPC project also defines several standard gRPC services that are generally applicable? In this talk, you will learn about gRPC's reflection, health, channelz, and status protos and how you can use them to get more out of your gRPC-based system.
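
As a small, hedged sketch of two of these well-known services in practice (not code from the talk), the snippet below wires the standard health and reflection services into a Python gRPC server using the grpcio-health-checking and grpcio-reflection packages; the port and the demo service name are assumptions.

```python
# Sketch: expose the standard gRPC health and reflection services from a
# Python server (requires grpcio, grpcio-health-checking, grpcio-reflection).
from concurrent import futures

import grpc
from grpc_health.v1 import health, health_pb2, health_pb2_grpc
from grpc_reflection.v1alpha import reflection

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))

# Standard health service: clients and load balancers can poll serving status.
health_servicer = health.HealthServicer()
health_pb2_grpc.add_HealthServicer_to_server(health_servicer, server)
health_servicer.set("demo.MyService", health_pb2.HealthCheckResponse.SERVING)  # placeholder name

# Standard reflection service: lets callers discover the services at runtime.
reflection.enable_server_reflection(
    (reflection.SERVICE_NAME, health.SERVICE_NAME), server
)

server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
```

With reflection enabled, tools such as grpcurl can list and call the server's services without having the proto files locally.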
Room 1

12:52pm IST

Keptn: Supercharge Your Deployments! | Project Lightning Talk
Wednesday December 11, 2024 12:52pm - 12:57pm IST
Discover how Keptn automates your deployment checks, improves observability, and reduces the complexity in your Application Lifecycle workflows.
This talk will introduce the Keptn project, highlighting its core features such as pre/post-deployment tasks/checks, DORA metrics for GitOps tooling, metrics collection from different observability platforms, and much more!
Learn some real-world use cases of Keptn as well!
Room 1

2:55pm IST

Multi-Node Finetuning LLMs on Kubernetes: A Practitioner’s Guide - Ashish Kamra & Boaz Ben Shabat, Red Hat
Wednesday December 11, 2024 2:55pm - 3:30pm IST
Large Language Model (LLM) finetuning on enterprise private data has emerged as an important strategy for enhancing model performance on specific downstream tasks. This process, however, demands substantial compute resources and presents some unique challenges in Kubernetes environments. This session offers a practical, step-by-step guide to implementing multi-node LLM finetuning on Kubernetes clusters with GPUs, utilizing PyTorch FSDP and the Kubeflow training operator. We'll cover preparing a Kubernetes cluster for LLM finetuning; optimizing cluster, system, and network configurations; and comparing the performance of various network topologies, including pod networking, secondary networks, and GPU Direct RDMA over Ethernet, for peak performance. By the end of this session, the audience will have a comprehensive understanding of the intricacies involved in multi-node LLM finetuning on Kubernetes, empowering them to introduce the same in their own production Kubernetes environments.
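
As a hedged, minimal sketch of the kind of multi-node job the session describes, the snippet below submits a one-master, two-worker PyTorchJob to the Kubeflow training operator via the Kubernetes Python client; the container image, command, and GPU counts are placeholders, not the presenters' configuration.

```python
# Illustrative only: a 1 master + 2 worker PyTorchJob for the Kubeflow
# training operator. Image, command, and resource sizes are placeholders.
from kubernetes import client, config

config.load_kube_config()

def replica_spec(replicas: int) -> dict:
    return {
        "replicas": replicas,
        "restartPolicy": "OnFailure",
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "pytorch",
                        "image": "example.com/llm-finetune:latest",  # placeholder
                        "command": ["torchrun", "train_fsdp.py"],    # placeholder
                        "resources": {"limits": {"nvidia.com/gpu": "8"}},
                    }
                ]
            }
        },
    }

pytorch_job = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "PyTorchJob",
    "metadata": {"name": "llm-finetune-fsdp", "namespace": "default"},
    "spec": {
        "pytorchReplicaSpecs": {
            "Master": replica_spec(1),
            "Worker": replica_spec(2),
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org",
    version="v1",
    namespace="default",
    plural="pytorchjobs",
    body=pytorch_job,
)
```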
Speakers

Ashish Kamra

Senior Manager, Red Hat
Dr. Ashish Kamra is an accomplished engineering leader with over 15 years of experience managing high-performing teams in AI, machine learning, and cloud computing. He joined Red Hat in March 2017, where he currently serves as the Senior Manager of AI Performance at Red Hat. In this...

Boaz Ben Shabat

Senior AI Performance Engineer, Red Hat
I am an Engineer with extensive experience in optimizing large-scale, high-performance computing environments. My expertise includes network architecture, system performance tuning, and cloud infrastructure. I excel in solving complex technical challenges and improving efficiency...
Room 1
  AI + ML

3:45pm IST

Optimizing 5G Networks: Deploying AI/ML Workloads with the AIMLFW of O-RAN SC - Subhash Kumar Singh, Samsung
Wednesday December 11, 2024 3:45pm - 4:20pm IST
This session will explore the AI/ML Framework (AIMLFW) within the O-RAN SC (O-RAN Software Community), designed for dynamic and efficient 5G network management. Key topics:

- Introduction to O-RAN SC and AIMLFW: Overview of O-RAN’s architecture and mission.
- AI/ML Use Cases in O-RAN: Real-world applications like traffic prediction and anomaly detection, supported by AIMLFW’s scalable platform.
- Architecture and Components of AIMLFW:
  - Kubeflow for Model Training
  - KServe for Model Deployment
  - O-RAN Specification for AI/ML Workload Deployment
  - Core ML Lifecycle Components
- Challenges and Solutions in AI/ML Deployment: Addressing common challenges in distributed 5G environments.
- Future Directions and Community Collaboration: Potential integration with Flyte and MLflow for enhanced AI/ML workflow management.
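
Because the framework leans on Kubeflow for training and KServe for deployment, here is a generic, hedged sketch (not taken from AIMLFW) of publishing a trained model as a KServe InferenceService with the Kubernetes Python client; the model format, storage URI, and names are assumptions.

```python
# Generic sketch: serve a trained model with KServe. The storageUri, model
# format, and names are placeholders and not specific to AIMLFW.
from kubernetes import client, config

config.load_kube_config()

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "traffic-prediction", "namespace": "default"},
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "sklearn"},                 # placeholder
                "storageUri": "s3://models/traffic-prediction/",    # placeholder
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="default",
    plural="inferenceservices",
    body=inference_service,
)
```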
Speakers

Subhash Kumar Singh

Senior Chief Engineer, Samsung
Subhash Kumar Singh is a Senior Chief Engineer at Samsung, where he leads the AI/ML Framework (AIMLFW) project within the O-RAN Software Community (SC). Over the years, Subhash has been actively involved in several prominent open-source communities. His extensive experience in these...
Room 1
  AI + ML

4:20pm IST

Coffee Break ☕
Wednesday December 11, 2024 4:20pm - 4:50pm IST
Room 1

4:50pm IST

PepsiCo’s Smart Edge Computing Delivers Anomaly Detection & Proactive Problem Solving to Boost Sales - Praseed Naduvath & Amit Mele, PepsiCo
Wednesday December 11, 2024 4:50pm - 5:25pm IST
In the era of digital transformation, PepsiCo is leading the way in integrating edge computing to ensure real-time data processing across its network. Utilizing lightweight Kubernetes solutions like K3s and RKE2, PepsiCo has built a platform that boosts computational capabilities at edge locations. Supported by Rancher and Longhorn, this platform enables efficient microservices deployment, providing the agility needed to meet dynamic market demands. A key component is the deployment of advanced ML models for camera and video inferences, which require substantial GPU resources. PepsiCo employs cutting-edge GPU sharing techniques to optimize these costly assets, improving performance and scalability while reducing costs. Join us to explore PepsiCo's edge computing strategy, its use of lightweight Kubernetes, and its innovative GPU sharing techniques. Learn how PepsiCo is harnessing edge computing to drive operational excellence and sales growth and maintain a competitive edge.
Speakers

Amit Mele

Deputy Director of Integration Engineering, PepsiCo
I currently hold the position of Deputy Director of Integration Engineering at PepsiCo. With 17 years of experience, I specialize in platform engineering and application development. With certifications in CKA, CKS, K3S, and Edge Architect, I’ve spent 6 years in platform strategy...

Praseed Naduvath

PepsiCo
Praseed Naduvath is a techno-manager with over 18 years in IT, specializing in cloud infrastructure, container orchestration, and service mesh technologies. A Certified Kubernetes Administrator and Security Specialist, he excels in managing and securing complex Kubernetes environments...
Room 1
  AI + ML

5:40pm IST

Running GenAI Apps the Cloud Native Way - Arun Gupta, Intel
Wednesday December 11, 2024 5:40pm - 6:15pm IST
Enterprises are eager to adopt Generative AI to increase their productivity. OPEA, the Open Platform for Enterprise AI, is a new project with the Linux Foundation. It provides a framework of composable microservices for state-of-the-art GenAI systems including LLMs, data stores, and prompt engines. It provides blueprints of end-to-end workflows for popular use cases such as ChatQnA, CodeGen, and RAG systems. OPEA applications leverage cloud-native architecture to simplify deployment. It even includes a friendly high-level pipeline definition language for deployment on Kubernetes. This session will introduce OPEA, its key component microservices, and how GenAI applications can be composed using those microservices. Attendees will learn how to get started with OPEA by deploying their GenAI application on a k8s cluster. Explicit contribution opportunities will be shared with the attendees. You'll also see an open source OPEA playground running on k8s, and how to contribute your components to it.
Speakers

Arun Gupta

VP/GM, Open Ecosystem, Intel
Arun Gupta is vice president of Open Ecosystem Initiatives at Intel Corporation. He has been an open source strategist, advocate, and practitioner for over two decades. He has taken companies such as Apple, Amazon, and Sun Microsystems through systemic changes to embrace open source principles...
Room 1
  AI + ML
  Content Experience Level: Any
 