OpenUSD v24.11 is now available on GitHub, and its core non-imaging libraries can be installed from PyPI with the following command:
pip install usd-core
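As a quick sanity check after installing, the following minimal Python sketch (using only standard pxr APIs) creates an in-memory stage and prints its contents as USDA text:

from pxr import Usd, UsdGeom

# Create a small stage in memory and define a couple of prims.
stage = Usd.Stage.CreateInMemory()
UsdGeom.Xform.Define(stage, "/World")
UsdGeom.Sphere.Define(stage, "/World/Sphere")

# Print the resulting scene description as USDA text.
print(stage.GetRootLayer().ExportToString())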
This release is chock-full of improvements for developers and creators alike from all industries. Fulfilling the promise of OpenUSD as a powerhouse for 3D collaboration is a team effort, and v24.11 represents an ever-broadening swath of efforts around the world to share stories and technology.
Read more below to see how we are all converging on OpenUSD’s paradigms to further a data ecosystem that is as welcoming, secure, and immersive as the physical world and its virtual representations demand.
Removing Boost Dependency
Developers can now build OpenUSD, including its Python bindings, without depending on boost. This addresses a major, longstanding pain point reported by developers: it greatly simplifies and improves the build and packaging experience, and it eases integration of OpenUSD with downstream components that may still carry their own boost dependencies.
OpenUSD’s build script will continue to compile and link boost when OpenVDB and OpenImageIO support are enabled, as those upstream components still require boost in their headers or build processes. However, developers no longer need to reconcile the boost version used within OpenUSD itself with those of OpenVDB and OpenImageIO.
OpenUSD now uses its own internal pxr_boost::python library for generating Python bindings instead of boost::python. Please see the Changelog for more details on maintaining interoperability with other Python bindings, and report any unexpected issues on GitHub.
Performance Benchmarking
Beginning with v24.11, every release of OpenUSD will publish performance benchmarks. These metrics, assessed on public datasets with known hardware and software configurations, provide a baseline for apples-to-apples comparisons across the ecosystem for key performance indicators such as time to open the stage and time to render the first image (that is, how long a user should expect to wait before they can inspect the scene).
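As a rough illustration, the "time to open the stage" metric can be captured locally with a few lines of Python; the asset path below is a placeholder, whereas the published benchmarks use public datasets on pinned hardware and software configurations:

import time
from pxr import Usd

# Placeholder path; substitute any public benchmark dataset.
ASSET_PATH = "Kitchen_set/Kitchen_set.usd"

start = time.perf_counter()
stage = Usd.Stage.Open(ASSET_PATH)
elapsed = time.perf_counter() - start
print(f"Time to open stage: {elapsed:.3f} s")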
Security Advisory
Security is important to OpenUSD and AOUSD. Potential OpenUSD vulnerabilities can now be reported safely via the Security Advisory on GitHub or via [email protected]. Efforts are made to respond to and address reports in a timely manner, recognizing the security requirements of various domains, including industrial manufacturing and public sector use cases. Please check out aousd.org/security for more details.
UsdSemantics for Ground Truth Labeling
OpenUSD now supports semantic labels via the UsdSemantics schema. This expands the OpenUSD ecosystem for synthetic 3D data generation, a critical step in bootstrapping the development and training of perception AI models for applications like physical AI-powered robotics.
Semantic labels and segmentation also ascribe meaning to objects identifiable by AI in the scene, laying the groundwork for content specifications that ensure virtual objects in industrial digital twins exhibit all the capabilities required by their physical equivalents.
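As a rough sketch of what this looks like in practice, the snippet below authors labels as a token-array attribute following the schema's semantics:labels:<taxonomy> naming convention; the prim path, taxonomy, and label values are purely illustrative, and the generated UsdSemantics accessors can be used instead of raw attribute authoring:

from pxr import Usd, Sdf

stage = Usd.Stage.CreateInMemory()
prim = stage.DefinePrim("/World/Forklift", "Xform")

# Illustrative labels in a "category" taxonomy; the attribute name follows
# the semantics:labels:<taxonomy> convention described by the schema.
attr = prim.CreateAttribute("semantics:labels:category", Sdf.ValueTypeNames.TokenArray)
attr.Set(["forklift", "vehicle"])

print(stage.GetRootLayer().ExportToString())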
AVIF in USDZ
The AV1 Image File Format (AVIF) is now on the official list of allowed image file types in USDZ, following the addition of the hioAvif imaging backend in OpenUSD v24.08.
AVIF is a royalty-free, open-source image format with modern compression, flexible color specification, high-dynamic-range values, depth images and alpha channels, and support for layered and sequential images.
Smaller file sizes, achieved without compromising quality or representable color gamut, make AVIF a solid option for encoding textures in scenes delivered to mobile devices for Augmented Reality experiences.
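For example, an AVIF texture plugs into a material network like any other supported image format. A minimal sketch using the standard UsdShade and UsdPreviewSurface pattern follows (the file path is a placeholder):

from pxr import Usd, UsdShade, Sdf

stage = Usd.Stage.CreateInMemory()
material = UsdShade.Material.Define(stage, "/World/Looks/Mat")

# Texture reader pointing at an AVIF asset (placeholder path).
texture = UsdShade.Shader.Define(stage, "/World/Looks/Mat/DiffuseTex")
texture.CreateIdAttr("UsdUVTexture")
texture.CreateInput("file", Sdf.ValueTypeNames.Asset).Set("textures/wood.avif")

# Feed the texture's RGB output into a preview surface's diffuse color.
surface = UsdShade.Shader.Define(stage, "/World/Looks/Mat/Surface")
surface.CreateIdAttr("UsdPreviewSurface")
surface.CreateInput("diffuseColor", Sdf.ValueTypeNames.Color3f).ConnectToSource(
    texture.CreateOutput("rgb", Sdf.ValueTypeNames.Float3))
material.CreateSurfaceOutput().ConnectToSource(surface.ConnectableAPI(), "surface")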
Imaging Support for iOS and visionOS
OpenUSD now supports imaging on iOS and visionOS. Developers for those platforms can leverage the same Hydra paradigms for rendering as anyone else in the OpenUSD ecosystem.
This opens up more paths for content creators and pipeline developers to work together to validate the appearance and behavior of their assets across all platforms that support OpenUSD.
Validation Framework Improvements
The validation framework continues to improve the introspectability of USD content for developers and creators alike. This release adds more context to validation errors, along with identifiers that help distinguish between the different types of errors that may be reported from a given set of rules.
Reasoning about even a concise validation report is not easy when it spans as many domains as OpenUSD itself. This release further defines the “tokens” (identifiers, types, and contextual specifics) at the validation-error level that give humans and AIs a basis for interpreting each error and taking the appropriate corrective action: what does this error mean, and how does it affect the objects and operations that this 3D scene is intended to represent?
This release also adds many Python bindings to the validation framework, making it more convenient for developers to invoke C++ rules from Python and enhancing the legibility of USD content and pipelines for stakeholders of all disciplines.
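A rough sketch of what running validators from Python might look like is shown below; the module and class names used here (UsdValidation, ValidationRegistry, and their methods) are assumptions mirroring the C++ registry API and should be checked against the release's documentation:

from pxr import Usd
# Assumed module name for the validation framework's Python bindings.
from pxr import UsdValidation

stage = Usd.Stage.Open("asset.usd")  # placeholder asset

# Assumed registry access and method names, mirroring the C++ API.
registry = UsdValidation.ValidationRegistry()
validators = registry.GetOrLoadAllValidators()

for validator in validators:
    for error in validator.Validate(stage):
        # Assumed accessors for the error identifier and message.
        print(error.GetIdentifier(), error.GetMessage())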
PEGTL USDA Parser
Human-readable USD content (i.e., the USDA file format) is now parsed in OpenUSD using the Parsing Expression Grammar Template Library (PEGTL). This is a more modern and modular implementation of the USDA grammar than the previous lex/yacc incarnation.
In addition to performance improvements, this implementation is easier for developers to read, as each parsing rule has a clear localized context. This work has also been reflected in our working drafts of the AOUSD Core Specification, where the working group has experienced firsthand how much simpler it is to reason about the grammar in these terms.
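The change is transparent to users: the same USDA text parses as before. A small sketch that loads a USDA snippet from a string (the layer contents are illustrative):

from pxr import Sdf

usda_text = """#usda 1.0
def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 2
    }
}
"""

# Parse the USDA text into an anonymous layer and round-trip it back to text.
layer = Sdf.Layer.CreateAnonymous(".usda")
layer.ImportFromString(usda_text)
print(layer.ExportToString())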
Check out the full release notes on GitHub.
In addition, Pixar’s SIGGRAPH 2024 Birds of a Feather slides have been added as a PDF to the Downloads page.
Interested in learning more about OpenUSD? Take courses in NVIDIA’s free Learn OpenUSD series.
If your company would like to join the Alliance for OpenUSD, sign up to become a member. Follow AOUSD on Facebook, Instagram, LinkedIn, X, and YouTube, and get support from our community of artists, designers, and developers in our forum.