# llama

**Repository Path**: mirrors_cloudera/llama

## Basic Information

- **Project Name**: llama
- **Description**: Llama - Low Latency Application MAster
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: cdh5-1.0.0
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-08-08
- **Last Updated**: 2025-12-20

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

Llama ${project.version}

Llama is a Yarn Application Master that mediates the management and monitoring of cluster resources between Impala and Yarn. Llama provides a Thrift API that Impala uses to request and release allocations outside of Yarn-managed container processes.

For details on how to build Llama, refer to the BUILDING.txt file. For details on how to use Llama, please refer to the Llama documentation.
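The README describes Llama as a broker that lets Impala reserve and release cluster resources over Thrift, outside of Yarn-managed containers. The sketch below illustrates that reserve/release pattern in miniature. It is a hypothetical, self-contained example: the `ResourceBroker` interface, `InMemoryBroker` class, and all method names are invented for illustration and are not Llama's actual Thrift API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical stand-in for the kind of reserve/release broker the README
// describes. Llama's real Thrift service differs in names and signatures.
interface ResourceBroker {
    UUID reserve(String node, int memoryMb, int vcores); // ask for an allocation
    void release(UUID reservationId);                    // hand it back
}

// Toy in-memory implementation so the pattern is runnable end to end.
class InMemoryBroker implements ResourceBroker {
    private final Map<UUID, String> reservations = new HashMap<>();

    @Override
    public UUID reserve(String node, int memoryMb, int vcores) {
        UUID id = UUID.randomUUID();
        reservations.put(id, node + ":" + memoryMb + "mb:" + vcores + "vcores");
        return id;
    }

    @Override
    public void release(UUID reservationId) {
        reservations.remove(reservationId);
    }

    int active() {
        return reservations.size();
    }
}

public class BrokerDemo {
    public static void main(String[] args) {
        InMemoryBroker broker = new InMemoryBroker();
        UUID id = broker.reserve("node-1", 4096, 2);
        System.out.println("active after reserve: " + broker.active()); // prints 1
        broker.release(id);
        System.out.println("active after release: " + broker.active()); // prints 0
    }
}
```

In the real system the client side of such an interface runs inside Impala, the server side inside the Llama Application Master, and the Thrift layer carries the calls between them.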