“I had the pleasure of reporting to Angus. As a manager he trusts you and also lets you know how you should read and understand him. Provides you the feedback which you need to sustain/grow. This makes it so easy and comfortable working with him. He listens to you and guides you. Technically sound. Quality and time conscious.”
Activity
-
"Expressing my gratitude for the incredible opportunity to present our data and AI journey at Rivian—it's truly an exhilarating future and another…
Liked by Angus Yeung
-
XR is ready for take-off! Exciting times for Quintar, Inc and all of our colleagues and partners in the XR space!
Liked by Angus Yeung
-
Reach out to Dr. Yang if you have strong AI research background and passion for deploying cutting edge AI technologies for software-defined vehicles.
Shared by Angus Yeung
Experience
Education
-
Doctoral thesis on Bayesian non-rigid motion modeling and estimation. Research interests include computer vision, machine learning, ultrasonic imaging, and image and video processing. Won the Michael Merickel Best Paper Award at the SPIE Conference.
-
Activities and Societies: Hong Kong C. W. Chu Scholarship (香港朱敬文愽士獎學金), Genesee Scholarship, Phi Beta Kappa National Honor Society, Tau Beta Pi Honor Engineering Society
Volunteer Experience
-
Managing Director, Founder
Xu and Yeung Foundation
- Present 4 years 5 months
Science and Technology
Private foundation promoting equality in America and helping people improve their quality of life through technology.
Publications
-
DELIVERING OBJECT-BASED IMMERSIVE MEDIA EXPERIENCES IN SPORTS
ITU Journal: ICT Discoveries
Immersive media technology in sports enables fans to experience interactive, personalized content. A fan can experience the action in six degrees of freedom (6DoF), through the eyes of a player or from any desired perspective. Intel Sports makes deploying these immersive media experiences a reality by transforming the captured content from the cameras installed in the stadium into preferred volumetric formats used for compressing and streaming the video feed, as well as for decoding and rendering desired viewports to fans’ many devices. Object-based immersive coding enables innovative use cases where the streaming bandwidth can be better allocated to the objects of interest. The Moving Picture Experts Group (MPEG) is developing immersive codecs for streaming immersive video and point clouds. In this paper, we explain how to implement object-based coding in MPEG metadata for immersive video (MIV) and video-based point-cloud coding (V-PCC) along with the enabled experiences.
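For illustration only, here is a minimal Python sketch of the object-based idea described above: a streaming bitrate budget is split across scene objects in proportion to how interesting they are to the viewer. The object names, weights, and the proportional-allocation rule are assumptions for the example, not details taken from the paper.

```python
# Hypothetical illustration: split a total bitrate budget across scene objects
# so that objects of interest get proportionally more bandwidth, in the spirit
# of object-based immersive coding. Object names and weights are made up.

def allocate_bitrate(objects: dict[str, float], total_kbps: float) -> dict[str, float]:
    """Distribute total_kbps across objects in proportion to their interest weight."""
    total_weight = sum(objects.values())
    if total_weight == 0:
        # No preference expressed: split evenly.
        return {name: total_kbps / len(objects) for name in objects}
    return {name: total_kbps * w / total_weight for name, w in objects.items()}

if __name__ == "__main__":
    # Example: a fan following the ball and one player more closely than the background.
    weights = {"ball": 0.5, "player_10": 0.3, "crowd_background": 0.2}
    print(allocate_bitrate(weights, total_kbps=20000.0))
```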
-
DASH-based Signaling of Recommended Viewport Information
MPEG International Standards
ISO/IEC JTC1/SC29/WG11 MPEG/m50654, October 2019, Geneva, CH.
-
Delivering Live Immersive Media Experiences in Sports
VR Industry Forum
-
Object-Based Applications for Immersive Video Coding
MPEG International Standards
ISO/IEC JTC1/SC29/WG11 MPEG/m50949, October 2019, Geneva, CH.
-
Object-based Applications for Video Point Cloud Compression
MPEG International Standards
ISO/IEC JTC1/SC29/WG11 MPEG/m50950, October 2019, Geneva, CH
-
On Client Feedback Signaling of Viewport Information
MPEG International Standards
ISO/IEC JTC1/SC29/WG11 MPEG/m50655, October 2019, Geneva, CH.
-
SEI Messages for MIV and V-PCC
MPEG International Standards
ISO/IEC JTC1/SC29/WG11 MPEG/m49957, October 2019, Geneva, CH.
-
Hands-On Server-Side Web Development with Swift: Build dynamic web apps by leveraging two popular Swift web frameworks: Vapor 3.0 and Kitura 2.5
Packt Publishing
This book is about building professional web applications and web services using Swift 4.0 and leveraging two popular Swift web frameworks: Vapor 3.0 and Kitura 2.5. In the first part of this book, we'll focus on the creation of basic web applications from Vapor and Kitura boilerplate projects. As the web apps start out simple, more useful techniques, such as unit test development, debugging, logging, and the build and release process, will be introduced to readers.
In the second part, we'll learn different aspects of web application development with server-side Swift, including setting up routes and controllers to process custom client requests, working with template engines such as Leaf and Stencil to create dynamic web content, beautifying the content with Bootstrap, managing user access with authentication framework, and leveraging the Object Relational Mapping (ORM) abstraction layer (Vapor's Fluent and Kitura's Kuery) to perform database operations.
Finally, in the third part, we'll develop web services in Swift and build our API Gateway, microservices, and database backend in a three-tier architecture design. Readers will learn how to design RESTful APIs, work with asynchronous processes, and leverage container technology such as Docker in deploying microservices to cloud hosting services such as Vapor Cloud and IBM Cloud.
-
Hardware Accelerated Motion Estimation for Non-rigid Motion Field
Proceedings of the Fifth International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM)
Patents
-
Point cloud playback mechanism
Issued 11928845
An apparatus to facilitate real-time playback of point cloud sequence data is disclosed. The apparatus comprises one or more processors to receive point cloud data of a captured scene, decompose the point cloud data into a plurality of point cloud patches, wherein each point cloud patch is associated with an object in the scene and includes contextual information regarding the point cloud patch, encode each of the point cloud patches via a deep-learning based algorithm to generate encoded point cloud patches, receive a viewpoint selection from a client, assign a priority to data chunks within each encoded point cloud patch based on the viewpoint selection and the contextual information and transmit the data chunks to the client based on the assigned priority.
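As a rough, hypothetical sketch of the prioritization step in this abstract, the snippet below orders point cloud patch chunks by how close each patch is to the client's selected viewpoint and would transmit them in that order. The PatchChunk structure and the distance-based priority are illustrative assumptions, not the patented algorithm.

```python
# Hypothetical sketch: data chunks belonging to patches near the client's
# selected viewpoint are sent first. The structures below are assumptions.
import math
from dataclasses import dataclass

@dataclass
class PatchChunk:
    patch_id: str
    center: tuple[float, float, float]  # contextual info: rough 3D location of the patch
    payload: bytes

def prioritize_chunks(chunks: list[PatchChunk],
                      viewpoint: tuple[float, float, float]) -> list[PatchChunk]:
    """Order chunks so that patches closest to the viewpoint are transmitted first."""
    return sorted(chunks, key=lambda c: math.dist(c.center, viewpoint))

# Usage: transmit in priority order.
chunks = [
    PatchChunk("background", (50.0, 0.0, 0.0), b"..."),
    PatchChunk("player", (2.0, 1.0, 0.5), b"..."),
]
for chunk in prioritize_chunks(chunks, viewpoint=(0.0, 0.0, 1.7)):
    pass  # send(chunk) over the network here
```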
-
Immersive video coding using object metadata
Issued 11902540
Methods, apparatus, systems and articles of manufacture for video coding using object metadata are disclosed. An example apparatus includes an object separator to separate input views into layers associated with respective objects to generate object layers for geometry data and texture data of the input views, a pruner to project the first object layer of a first basic view of the at least one basic views against the first object layer of a first additional view of the at least one additional views to generate a first pruned view and a first pruning mask, a patch packer to tag a patch with an object identifier of the first object, the patch corresponding to the first pruning mask, and an atlas generator to generate at least one atlas to include in encoded video data, the atlas including the patch.
-
Methods for Viewport-dependent Adaptive Streaming of Point Cloud Content
Issued 11831861
Embodiments herein provide mechanisms for viewport dependent adaptive streaming of point cloud content. For example, a user equipment (UE) may receive a media presentation description (MPD) for point cloud content in a dynamic adaptive streaming over hypertext transfer protocol (DASH) format. The MPD may include viewport information for a plurality of recommended viewports and indicate individual adaptation sets of the point cloud content that are associated with the respective recommended viewports. The UE may select a first viewport from the plurality of recommended viewports (e.g., based on viewport data that indicates a current viewport of the user and/or a user-selected viewport). The UE may request one or more representations of a first adaptation set, of the adaptation sets, that corresponds to the first viewport. Other embodiments may be described and claimed.
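A hedged sketch of the client-side selection described here: given recommended viewports parsed from an MPD, pick the one closest to the user's current viewing direction and request its adaptation set. The field names and the angular-distance heuristic are invented for illustration; real DASH MPD parsing is not shown.

```python
# Illustrative only: choose the adaptation set whose recommended viewport is
# closest to the user's current viewing direction. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class RecommendedViewport:
    adaptation_set_id: int
    yaw_deg: float
    pitch_deg: float

def angular_gap(a: float, b: float) -> float:
    """Smallest absolute difference between two angles in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def select_adaptation_set(viewports: list[RecommendedViewport],
                          user_yaw: float, user_pitch: float) -> int:
    """Return the adaptation set whose recommended viewport best matches the user's."""
    best = min(viewports,
               key=lambda v: angular_gap(v.yaw_deg, user_yaw) + angular_gap(v.pitch_deg, user_pitch))
    return best.adaptation_set_id

# Example: user looking roughly toward the second recommended viewport.
mpd_viewports = [RecommendedViewport(1, 0.0, 0.0), RecommendedViewport(2, 90.0, 10.0)]
print(select_adaptation_set(mpd_viewports, user_yaw=80.0, user_pitch=5.0))  # -> 2
```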
-
Video quality measurement for virtual cameras in volumetric immersive media
Issued 11748870
Apparatus and method for determining a quality score for virtual video cameras. For example, one embodiment comprises: a region of interest (ROI) detector to detect regions of interest within a first image generated from a first physical camera (PCAM) positioned at first coordinates; virtual camera circuitry and/or logic to generate a second image positioned at the first coordinates; image comparison circuitry and/or logic to establish pixel-to-pixel correspondence between the first image and the second image; an image quality evaluator to determine a quality value for the second image by evaluating the second image in view of the first image.
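For illustration, a small Python sketch of one way such a quality score could be computed: PSNR between the physical-camera (PCAM) image and the virtual-camera image, restricted to detected regions of interest. This is an assumed stand-in for the patented evaluator; ROI detection is presumed to have happened elsewhere.

```python
# Assumed illustration: score a virtual-camera frame against the co-located
# physical-camera frame using PSNR over detected ROIs (given as (y, x, h, w) boxes).
import numpy as np

def roi_psnr(pcam: np.ndarray, vcam: np.ndarray,
             rois: list[tuple[int, int, int, int]]) -> float:
    """PSNR computed only over the listed ROIs, for 8-bit grayscale images."""
    diffs = []
    for y, x, h, w in rois:
        a = pcam[y:y + h, x:x + w].astype(np.float64)
        b = vcam[y:y + h, x:x + w].astype(np.float64)
        diffs.append(((a - b) ** 2).ravel())
    mse = np.mean(np.concatenate(diffs))
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)

pcam = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
vcam = np.clip(pcam.astype(int) + np.random.randint(-3, 4, pcam.shape), 0, 255).astype(np.uint8)
print(roi_psnr(pcam, vcam, rois=[(100, 200, 64, 64), (300, 640, 128, 128)]))
```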
-
Dash-based streaming of point cloud content based on recommended viewports
Issued 11729243
Various embodiments herein provide adaptive streaming mechanisms for distribution of point cloud content. The point cloud content may include immersive media content in a dynamic adaptive streaming over hypertext transfer protocol (DASH) format. Various embodiments provide DASH-based mechanisms to support viewport indication during streaming of volumetric point cloud content. Other embodiments may be described and claimed.
-
Apparatus and System for Virtual Camera Configuration and Selection
Issued 11706375
A system and method for virtual camera configuration and selection.
-
Sensor Data Transmissions
Issued 11678810
(2nd granted patent from the same application) Technology for a wearable heart rate monitoring device is disclosed. The wearable heart rate monitoring device can include a heart rate sensor operable to collect sensor data, a modulator operable to generate a modulated signal that includes the sensor data, a housing configured to engage a body feature or surface in a manner that allows for heart rate detection, and a communication module configured to transmit the sensor data in the modulated signal to a mobile computing device via a wired connection that is power limited. The mobile computing device is typically configured to demodulate the modulated signal in order to extract the sensor data.
-
Automatic response system for wearables
Issued 11605007
(2nd granted patent from the application) One embodiment provides an apparatus. The apparatus includes a wearable device. The wearable device includes a knowledge base, a user interface and automatic response logic. The knowledge base includes at least one data structure. Each data structure includes a plurality of ranked possible user responses. The automatic response logic is to select one data structure of the at least one data structure in response to a received communication. The selecting is based, at least in part, on an event type and based, at least in part, on a contact identifier. The communication is received from a communication partner device via a companion device. The automatic response logic is further to provide at least one ranked possible user response from the selected data structure to a user via the user interface.
Other inventors
-
REGULATING COMMUNICATION BETWEEN A VEHICLE AND A USER DEVICE
Issued 20240172309
Systems and methods are provided for initiating first instructions to establish a communication session between a user device and a vehicle, wherein the first instructions include a wait time interval and the communication session is associated with a short-range wireless communication protocol. The system and methods may determine a number of unsuccessful attempts for establishing the communication session exceeds a threshold value, and in response to determining that the number exceeds the threshold value, generate second instructions to establish the communication session, wherein the second instructions include a modification to the wait time interval. The second instructions may be initiated to establish the communication session between the user device and the vehicle.
-
Pre-stitching tuning automation for panoramic VR applications
Issued 11457193
Methods, systems and apparatuses may provide for technology that identifies a seam area between a pair of images corresponding to a first eye and determines a disparity between the seam area and a reference area at a center line of a reference image corresponding to a second eye. The technology may also automatically adjust one or more pre-stitch parameters of camera sensors associated with the pair of images and the reference image based on the disparity.
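A minimal sketch, under assumed details, of the feedback loop this abstract describes: measure the luma disparity between the seam area and the reference area at the center line of the other eye's image, then nudge a pre-stitch parameter (here a hypothetical sensor gain) to reduce it.

```python
# Rough illustration only: adjust an assumed pre-stitch "gain" parameter in the
# direction that shrinks the luma disparity between seam and reference areas.
import numpy as np

def adjust_gain(seam: np.ndarray, reference: np.ndarray, gain: float,
                step: float = 0.02) -> float:
    """Move the gain a small step in the direction that reduces the mean-luma disparity."""
    disparity = float(seam.mean()) - float(reference.mean())
    if abs(disparity) < 1.0:          # close enough; leave the gain alone
        return gain
    return gain - step if disparity > 0 else gain + step

seam_area = np.full((64, 32), 140.0)       # seam from the first-eye image pair
reference_area = np.full((64, 32), 128.0)  # center line of the second-eye reference image
print(adjust_gain(seam_area, reference_area, gain=1.00))  # -> 0.98
```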
-
Systems and methods for virtual camera configuration
Issued 11443138
A virtual camera configuration system includes any number of cameras disposed about an area, such as an event venue. The system also includes at least one processor and at least one non-transitory, computer-readable medium communicatively coupled to the at least one processor. In certain embodiments, the at least one non-transitory, computer-readable medium is configured to store instructions which, when executed, cause the processor to perform operations including receiving a set of game data, receiving a set of audiovisual data, and receiving a set of camera presets. The operations also include generating a set of training data and training a model based on the set of training data. The operations also include generating, using the model on a second set of game data and a second set of audiovisual data, a second set of camera presets associated with the set of virtual cameras.
-
Panoramic virtual reality framework providing a dynamic user experience
Issued 11381739
An apparatus, system, and method are described for providing real-time capture, processing, and distribution of panoramic virtual reality (VR) content of a live event. One or more triggering events are identified and used to generate graphics and/or audio on client VR devices. For example, one embodiment of a method comprises: capturing video of an event at an event venue with a plurality of cameras to produce a corresponding plurality of video streams; generating a virtual reality (VR) stream based on the plurality of video streams; transmitting the VR stream to a plurality of client VR devices, wherein the client VR devices are to render VR environments based on the VR stream; detecting a triggering event during the event; and transmitting an indication of the triggering event to the plurality of client VR devices, wherein a first client VR device is to generate first event-based graphics and/or first event-based audio in accordance with the indication.
-
Sensor data management for multiple smart devices
Issued 11317832
One embodiment relates to an apparatus, comprising logic, at least partially incorporated into hardware, to: receive first sensor data associated with a first sensor of a first smart device; determine a first reliability factor associated with the first sensor data; receive second sensor data associated with a second sensor of a second smart device; and determine a second reliability factor associated with the second sensor data. The logic is further to determine a sensor data reporting plan based upon the first reliability factor and the second reliability factor, the sensor data reporting plan indicating whether each of the first sensor and the second sensor are to subsequently send their respective sensor data to a primary communication device.
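A hypothetical sketch of the reporting-plan idea: each sensor's reliability factor is compared against a threshold, and only sensors that clear it keep sending data to the primary communication device. The threshold value and dictionary structure are assumptions for the example.

```python
# Assumed illustration of deriving a sensor data reporting plan from
# per-sensor reliability factors; threshold and structure are invented.
def build_reporting_plan(reliability: dict[str, float], threshold: float = 0.6) -> dict[str, bool]:
    """Map each sensor name to whether it should keep sending data to the primary device."""
    return {sensor: factor >= threshold for sensor, factor in reliability.items()}

# Example: the watch's heart-rate sensor is trusted, the earbud's is not.
plan = build_reporting_plan({"watch_hr": 0.92, "earbud_hr": 0.41})
print(plan)  # {'watch_hr': True, 'earbud_hr': False}
```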
-
Virtual Skycam System
Issued 11,185,755
A system includes at least one processor and at least one non-transitory computer-readable media communicatively coupled to the at least one processor. In some embodiments, the at least one non-transitory computer-readable media stores instructions which, when executed, cause the processor to perform operations including receiving a first set of sensor data within a first time frame and receiving a set of skycam actions within the first time frame. In certain embodiments, the operations also include generating a set of reference actions corresponding to the first set of sensor data and the set of skycam actions. In some embodiments, the operations also include receiving a second set of sensor data associated with a second game status, a second game measurement, or both. The operations also include generating a sequence of skycam actions based on a comparison between the second set of sensor data and the set of reference actions.
Other inventors
-
Simulated Previews of Dynamic Virtual Camera
Issued US 10994202
The present disclosure includes a method for generating simulated previews of dynamic virtual cameras, the method comprising receiving virtual camera descriptor data, receiving object tracking data, generating virtual camera behavior data based on the virtual camera descriptor data and the object tracking data, the virtual camera behavioral data corresponding to virtual camera parameters for rendering a view, and generating a simulated preview based on the object tracking data and the virtual camera behavioral data.
-
Scene construction using object-based immersive media
Filed US 20210105451
Various embodiments herein provide techniques for scene construction using object based immersive media. Other embodiments may be described and claimed.
-
System and method for view optimized 360 degree virtual reality video streaming
Issued 11166067
An approach for streaming a coded virtual reality (VR) video stream including receiving segments of the coded VR video stream; storing the segments in a playback buffer; based on determining that a current playback time is within a threshold time of a playback time of a buffered segment, that a current duration of the playback buffer is larger than a threshold duration, that a current bandwidth is larger than a threshold bandwidth, and that a current viewport is different from a previous viewport, storing at least one refined tile corresponding to the current viewport in the playback buffer; constructing a frame based on the buffered segment and the at least one refined tile corresponding to the current viewport; and decoding the coded VR video stream based on the constructed frame.
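The buffered-playback decision above boils down to four conditions holding at once. The sketch below is not the patented logic, just an assumed illustration with invented thresholds and units.

```python
# Illustrative only: fetch refined tiles for the current viewport when all four
# conditions from the abstract hold. Thresholds and units are assumptions.
def should_fetch_refined_tiles(current_time_s: float, segment_time_s: float,
                               buffer_s: float, bandwidth_kbps: float,
                               current_viewport: int, previous_viewport: int,
                               time_threshold_s: float = 2.0,
                               buffer_threshold_s: float = 6.0,
                               bandwidth_threshold_kbps: float = 8000.0) -> bool:
    return (abs(segment_time_s - current_time_s) <= time_threshold_s
            and buffer_s > buffer_threshold_s
            and bandwidth_kbps > bandwidth_threshold_kbps
            and current_viewport != previous_viewport)

print(should_fetch_refined_tiles(10.0, 11.0, 8.0, 12000.0,
                                 current_viewport=3, previous_viewport=1))  # True
```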
-
Methods for Viewport-dependent Adaptive Streaming of Point Cloud Content
Filed US 20200382764
Embodiments herein provide mechanisms for viewport dependent adaptive streaming of point cloud content. For example, a user equipment (UE) may receive a media presentation description (MPD) for point cloud content in a dynamic adaptive streaming over hypertext transfer protocol (DASH) format. The MPD may include viewport information for a plurality of recommended viewports and indicate individual adaptation sets of the point cloud content that are associated with the respective recommended viewports. The UE may select a first viewport from the plurality of recommended viewports (e.g., based on viewport data that indicates a current viewport of the user and/or a user-selected viewport). The UE may request one or more representations of a first adaptation set, of the adaptation sets, that corresponds to the first viewport. Other embodiments may be described and claimed.
-
SMART DEVICE FOR NOTIFICATION LOOPBACK ROUTING TO A PRIMARY COMMUNICATION DEVICE
Issued US 20170289789
One embodiment relates to an apparatus, comprising logic, at least partially incorporated into hardware, to receive a notification message from a primary communication device by a smart device using a first communication protocol, the notification message including notification information received at an operating system layer of the primary communication device; determine, by the smart device, whether the notification message meets predetermined criteria; and responsive to a determination that the notification message meets the predetermined criteria, send, by the smart device, a loopback notification message including a representation of at least a portion of the notification information to the primary communication device using a second communication protocol.
-
Point Cloud Playback Mechanism
Filed US 20210042964
An apparatus to facilitate real-time playback of point cloud sequence data is disclosed. The apparatus comprises one or more processors to receive point cloud data of a captured scene, decompose the point cloud data into a plurality of point cloud patches, wherein each point cloud patch is associated with an object in the scene and includes contextual information regarding the point cloud patch, encode each of the point cloud patches via a deep-learning based algorithm to generate encoded point cloud patches, receive a viewpoint selection from a client, assign a priority to data chunks within each encoded point cloud patch based on the viewpoint selection and the contextual information and transmit the data chunks to the client based on the assigned priority.
-
System and Apparatus for User Controlled Virtual Camera for Volumetric Video
Filed US 20200388068
Apparatus, system, and method for rendering an immersive virtual reality environment of an event. For example, one embodiment of a system comprises: a video decoder to decode video data captured from a plurality of different cameras at an event to generate decoded video, the decoded video comprising a plurality of video images captured from each of the plurality of different cameras; image recognition hardware logic to perform image recognition on at least a portion of the video to identify objects within the plurality of video images; a metadata generator to associate metadata with one or more of the objects; a point cloud data generator to generate point cloud data based on the decoded video, the point cloud data usable to render an immersive virtual reality (VR) environment for the event; and a network interface to transmit the point cloud data or VR data derived from the point cloud data to a client device.
-
Touch gesture detection assessment
Issued US 10488975
Embodiments are directed to gesture recognition in a computing device. Touch-based input by the user is monitored based on an output from a touch sensor. Gestures are detected from among the touch-based input. The detected gestures are analyzed to assign gesture characteristic profiles to the detected gestures according to profiling criteria. A sequential event log is tabulated representing counts of series of gestures based on assigned characteristic profiles and on temporal sequencing of the gestures. Circumstances for invocation of gesture detection re-calibration are assessed based on the tabulated series of gestures.
-
AUTOMATIC RESPONSE SYSTEM FOR WEARABLES
Issued 10460244
One embodiment provides an apparatus. The apparatus includes a wearable device. The wearable device includes a knowledge base, a user interface and automatic response logic. The knowledge base includes at least one data structure. Each data structure includes a plurality of ranked possible user responses. The automatic response logic is to select one data structure of the at least one data structure in response to a received communication. The selecting is based, at least in part, on an event type and based, at least in part, on a contact identifier. The communication is received from a communication partner device via a companion device. The automatic response logic is further to provide at least one ranked possible user response from the selected data structure to a user via the user interface.
Other inventors
-
Wearable device command regulation
Issued US 10448358
(3rd granted patent from the same application.) Systems and methods for regulating alerts in a wearable device are disclosed. The alerts may be generated from a mobile device or a wearable device communicatively coupled to the mobile device. The system may include an alert storage module that receives alerts of various types, and generate a plurality of alert heaps each including respective one or more alerts. The system may determine for an alert a respective cost value associated with issuing a notification of the alert. The alert heaps may be merged to produce a cost-biased leftist heap including prioritized alerts based on the cost values of the alerts. The system may generate a queue of notification commands based on the prioritized alerts, and transmit the commands to the wearable device.
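For a concrete picture of the cost-biased heap merging mentioned above, here is a textbook leftist min-heap keyed by notification cost, written in Python. It is an illustrative stand-in, not the patented module; the node fields and sample alerts are assumptions. Merging per-type heaps this way keeps the cheapest (highest-priority) alert at the root with logarithmic-cost merges.

```python
# Illustrative leftist min-heap keyed by notification cost; field names assumed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlertNode:
    cost: float
    alert: str
    left: Optional["AlertNode"] = None
    right: Optional["AlertNode"] = None
    rank: int = 1  # null-path length of this node

def merge(a: Optional[AlertNode], b: Optional[AlertNode]) -> Optional[AlertNode]:
    """Merge two leftist heaps so the lowest-cost alert is always at the root."""
    if a is None:
        return b
    if b is None:
        return a
    if b.cost < a.cost:
        a, b = b, a
    a.right = merge(a.right, b)
    # Maintain the leftist property: left child's rank >= right child's rank.
    left_rank = a.left.rank if a.left else 0
    right_rank = a.right.rank if a.right else 0
    if left_rank < right_rank:
        a.left, a.right = a.right, a.left
    a.rank = min(left_rank, right_rank) + 1
    return a

def pop_min(root: AlertNode) -> tuple[str, Optional[AlertNode]]:
    """Remove and return the lowest-cost alert plus the remaining heap."""
    return root.alert, merge(root.left, root.right)

# Build per-alert heaps, then merge them into one prioritized queue.
heap = None
for cost, alert in [(3.0, "email"), (1.0, "fall detected"), (2.0, "calendar")]:
    heap = merge(heap, AlertNode(cost, alert))
while heap:
    alert, heap = pop_min(heap)
    print(alert)  # fall detected, calendar, email
```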
Other inventors
-
USER PATTERN RECOGNITION AND PREDICTION SYSTEM FOR WEARABLES
Issued US 10,410,129
One embodiment provides an apparatus. The apparatus includes a companion device. The companion device includes pattern recognition logic to construct a reference graph model based, at least in part, on a plurality of events captured from at least one of the companion device and a wearable device. The reference graph model includes at least one path, each path including one trigger node, at least one event node and a respective edge incident to each event node, a first edge coupling the trigger node and a first event node, a weight associated with each edge corresponding to a likelihood that a second event will follow a first event within a minimum trigger time interval.
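As a hedged sketch of the reference graph model described here, the snippet below counts which event tends to follow which within a time window, turns the counts into edge weights (likelihoods), and predicts the most likely next event after a trigger. Event names and the window length are illustrative assumptions.

```python
# Assumed illustration of a reference graph with likelihood-weighted edges.
from collections import defaultdict
from typing import Optional

def build_reference_graph(events: list[tuple[float, str]],
                          window_s: float = 60.0) -> dict[str, dict[str, float]]:
    """events is a time-ordered list of (timestamp_seconds, event_name)."""
    counts = defaultdict(lambda: defaultdict(int))
    for (t1, e1), (t2, e2) in zip(events, events[1:]):
        if t2 - t1 <= window_s:          # second event followed within the trigger interval
            counts[e1][e2] += 1
    graph: dict[str, dict[str, float]] = {}
    for e1, followers in counts.items():
        total = sum(followers.values())
        graph[e1] = {e2: n / total for e2, n in followers.items()}
    return graph

def predict_next(graph: dict[str, dict[str, float]], trigger: str) -> Optional[str]:
    followers = graph.get(trigger)
    return max(followers, key=followers.get) if followers else None

log = [(0.0, "alarm_dismissed"), (30.0, "fitness_app_opened"), (35.0, "music_started"),
       (3600.0, "alarm_dismissed"), (3620.0, "fitness_app_opened")]
graph = build_reference_graph(log)
print(predict_next(graph, "alarm_dismissed"))  # fitness_app_opened
```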
-
Method and apparatus for processing and distributing live virtual reality content
Filed US 16/958,698
An apparatus, system, and method are described for providing real-time capture, processing, and distribution of panoramic virtual reality (VR) content. For example, one embodiment of a graphics processor comprises a video interface to receive a first plurality of images from a corresponding first plurality of cameras; an image rectifier to perform a perspective re-projection of at least some of the first plurality of images to a common image plane to generate a rectified first plurality of images; a stitcher to analyze overlapping regions of adjacent images in the rectified first plurality and to identify corresponding pixels in the overlapping regions and to stitch the adjacent images in accordance with the corresponding pixels to generate a panoramic image comprising a stitched combination of the rectified first plurality of images; and a cylindrical projector to project the panoramic image onto a cylindrical surface to generate a final panoramic video image to be used to implement a virtual reality (VR) environment on a VR apparatus.
-
Hybrid Real-time Playback and Progressive Download of Point Cloud Sequence Data in Graphics Computing Environments
Filed US 62/884,949
-
SENSOR DATA TRANSMISSIONS
Issued US 10,292,607
Technology for a wearable heart rate monitoring device is disclosed. The wearable heart rate monitoring device can include a heart rate sensor operable to collect sensor data, a modulator operable to generate a modulated signal that includes the sensor data, a housing configured to engage a body feature or surface in a manner that allows for heart rate detection, and a communication module configured to transmit the sensor data in the modulated signal to a mobile computing device via a wired connection that is power limited. The mobile computing device is typically configured to demodulate the modulated signal in order to extract the sensor data.
-
Carriage of Quality for Point Cloud Data
Filed US 62/946,855
-
Client signaling scheme for Viewport-dependent Adaptive Streaming of Point Cloud Content
Filed US 62/903,616
-
DASH-based Streaming of Point Cloud Content Based on Recommended Viewports
Filed US 17/027,524
Various embodiments herein provide adaptive streaming mechanisms for distribution of point cloud content. The point cloud content may include immersive media content in a dynamic adaptive streaming over hypertext transfer protocol (DASH) format. Various embodiments provide DASH-based mechanisms to support viewport indication during streaming of volumetric point cloud content. Other embodiments may be described and claimed.
-
Geographically Distributed Real Time Media Processing System for Volumetric Video Distribution
Filed US 0
-
PANORAMIC VIRTUAL REALITY FRAMEWORK PROVIDING A DYNAMIC USER EXPERIENCE
Filed US 20200236278
An apparatus, system, and method are described for providing real-time capture, processing, and distribution of panoramic virtual reality (VR) content of a live event. One or more triggering events are identified and used to generate graphics and/or audio on client VR devices. For example, one embodiment of a method comprises: capturing video of an event at an event venue with a plurality of cameras to produce a corresponding plurality of video streams; generating a virtual reality (VR) stream based on the plurality of video streams; transmitting the VR stream to a plurality of client VR devices, wherein the client VR devices are to render VR environments based on the VR stream; detecting a triggering event during the event; and transmitting an indication of the triggering event to the plurality of client VR devices, wherein a first client VR device is to generate first event-based graphics and/or first event-based audio in accordance with the indication.
-
Pre-Stitching Tuning Automation for Panoramic VR Applications
Filed US 20200329223
Methods, systems and apparatuses may provide for technology that identifies a seam area between a pair of images corresponding to a first eye and determines a disparity between the seam area and a reference area at a center line of a reference image corresponding to a second eye. The technology may also automatically adjust one or more pre-stitch parameters of camera sensors associated with the pair of images and the reference image based on the disparity.
-
Systems and Methods for Virtual Camera Configuration
Filed US 20200265269
A virtual camera configuration system includes any number of cameras disposed about an area, such as an event venue. The system also includes at least one processor and at least one non-transitory, computer-readable medium communicatively coupled to the at least one processor. In certain embodiments, the at least one non-transitory, computer-readable medium is configured to store instructions which, when executed, cause the processor to perform operations including receiving a set of game data, receiving a set of audiovisual data, and receiving a set of camera presets. The operations also include generating a set of training data and training a model based on the set of training data. The operations also include generating, using the model on a second set of game data and a second set of audiovisual data, a second set of camera presets associated with the set of virtual cameras.
-
VIRTUAL SKYCAM SYSTEM
Filed US 20200222783
A system includes at least one processor and at least one non-transitory computer-readable media communicatively coupled to the at least one processor. In some embodiments, the at least one non-transitory computer-readable media stores instructions which, when executed, cause the processor to perform operations including receiving a first set of sensor data within a first time frame and receiving a set of skycam actions within the first time frame. In certain embodiments, the operations also include generating a set of reference actions corresponding to the first set of sensor data and the set of skycam actions. In some embodiments, the operations also include receiving a second set of sensor data associated with a second game status, a second game measurement, or both. The operations also include generating a sequence of skycam actions based on a comparison between the second set of sensor data and the set of reference actions.
-
Video Coding Using Object Metadata
Filed US 62/908,983
-
Video Quality Measurement for Virtual Cameras in Volumetric Immersive Media
Filed 20210097667
Apparatus and method for determining a quality score for virtual video cameras. For example, one embodiment comprises: a region of interest (ROI) detector to detect regions of interest within a first image generated from a first physical camera (PCAM) positioned at first coordinates; virtual camera circuitry and/or logic to generate a second image positioned at the first coordinates; image comparison circuitry and/or logic to establish pixel-to-pixel correspondence between the first image and the second image; an image quality evaluator to determine a quality value for the second image by evaluating the second image in view of the first image.
-
SYSTEM TO COMPENSATE FOR VISUAL IMPAIRMENT
Issued US 10,062,353
This disclosure is directed to a system to compensate for visual impairment. The system may comprise, for example, a frame wearable by a user to which is mounted at least sensing circuitry and display circuitry. The sensing circuitry may sense at least visible data and depth data. Control circuitry may then cause the display circuitry to visibly present the depth to the user based on the visible data and depth data. For example, the display circuitry may present visible indicia indicating depth to appear superimposed on the field of view to indicate different depths in the field of view, or may alter the appearance of objects in the field of view based on the depth of each object. The system may also be capable of sensing a particular trigger event, and in response may initiate sensing and presentation for a peripheral field of view of the user.
Other inventors
-
Wearable device command regulation
Issued US 10039077
(2nd granted patent for the application.) Systems and methods for regulating alerts in a wearable device are disclosed. The alerts may be generated from a mobile device or a wearable device communicatively coupled to the mobile device. The system may include an alert storage module that receives alerts of various types, and generate a plurality of alert heaps each including respective one or more alerts. The system may determine for an alert a respective cost value associated with issuing a notification of the alert. The alert heaps may be merged to produce a cost-biased leftist heap including prioritized alerts based on the cost values of the alerts. The system may generate a queue of notification commands based on the prioritized alerts, and transmit the commands to the wearable device.
Other inventors
-
Devices and Methods to Compress Sensor Data
Issued US 9986069
Devices and methods to compress sensor data are generally described herein. An exemplary wearable device to compress sensor data may include a sensor including circuitry to sense sensor data, and a communication circuit to receive, from a remote device, a detected link quality of a low-power communication channel used to communicate with the remote device. The communication circuit further to transmit compressed data to a remote device over the low power communication channel. The wearable device may further include a compressible sensor data module to apply a compression algorithm to compress received sensor data based on the detected link quality to provide the compressed sensor data.
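An assumed illustration of adapting compression to link quality, as this abstract describes: poorer reported link quality selects a more aggressive compression level. zlib stands in for the unspecified compression algorithm, and the quality bands are invented.

```python
# Illustrative only: compress sensor samples harder when the reported link
# quality of the low-power channel is poor. Bands and levels are assumptions.
import json
import zlib

def compress_sensor_data(samples: list[float], link_quality: float) -> bytes:
    """link_quality in [0, 1]; worse links get higher zlib compression levels."""
    if link_quality < 0.3:
        level = 9      # weak link: spend CPU to shrink the payload as much as possible
    elif link_quality < 0.7:
        level = 6
    else:
        level = 1      # strong link: cheap, fast compression is enough
    return zlib.compress(json.dumps(samples).encode("utf-8"), level)

payload = compress_sensor_data([72.0, 71.5, 73.2, 74.0], link_quality=0.25)
print(len(payload), "bytes after compression")
```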
-
PREDICTIVE SCREEN DISPLAY METHOD AND APPARATUS
Issued US 9959839B2
Apparatuses, methods and storage media associated with display of visual assets on a device are described. Specifically, the device may include a display screen. The device may further include a visual asset scheduler. The visual asset scheduler may include a screen predictor, a queue, and a visual asset loader. Other embodiments may be described and/or claimed.
Other inventors
-
USER INTERACTION WITH WEARABLE DEVICES
Issued US 9,952,660
Particular embodiments described herein provide for an electronic device that can be configured to determine that an unobtrusive gesture has been received on a first electronic device and send a signal to a second electronic device in response to the unobtrusive gesture. The first electronic device can also be configured to receive a signal from the second electronic device, determine an unobtrusive output in response to the signal, and generate an unobtrusive notification in response to the received signal. In an example, the first electronic device is a part of jewelry worn by a user.
-
IMAGE PROCESSOR FOR WEARABLE DEVICE
Issued US 9881405B2
Solutions for producing an image on an irregular surface are described. A graphical object is identified from an image to be displayed on the irregular surface. Objects according to at least one shape function are distorted to compensate for irregularities in the irregular surface. Previously-distorted instances of objects may be added to a distortion-compensated image.
-
COLLABORATIVE TRANSMISSION MANAGEMENT FOR SMART DEVICES
Issued US 9838970
One embodiment relates to an apparatus, comprising logic, at least partially incorporated into hardware, to determine whether a first device priority associated with a first smart device is greater than a second device priority associated with a second smart device; and responsive to a determination that the first device priority is greater than the second device priority: send first data associated with the first smart device from the first smart device to a primary communication device; and send a first message from the first smart device, the first message including a first indication that the second smart device is to transmit second data associated with the second smart device to the primary communication device.
Other inventors
-
Collaborative transmission management for smart devices
Issued US 10200955
One embodiment relates to an apparatus, comprising logic, at least partially incorporated into hardware, to determine whether a first device priority associated with a first smart device is greater than a second device priority associated with a second smart device; and responsive to a determination that the first device priority is greater than the second device priority: send first data associated with the first smart device from the first smart device to a primary communication device; and send a first message from the first smart device, the first message including a first indication that the second smart device is to transmit second data associated with the second smart device to the primary communication device.
-
WEARABLE DEVICE COMMAND REGULATION
Issued US 9622180
(1st granted patent for the application.) Systems and methods for regulating alerts in a wearable device are disclosed. The alerts may be generated from a mobile device or a wearable device communicatively coupled to the mobile device. The system may include an alert storage module that receives alerts of various types, and generate a plurality of alert heaps each including respective one or more alerts. The system may determine for an alert a respective cost value associated with issuing a notification of the alert. The alert heaps may be merged to produce a cost-biased leftist heap including prioritized alerts based on the cost values of the alerts. The system may generate a queue of notification commands based on the prioritized alerts, and transmit the commands to the wearable device.
Other inventors
-
Position detection of a wearable heart rate monitoring device
Filed US WO2016153723A1
Technology for detecting whether a wearable device is misaligned is disclosed. The device can include a number of sensors, such as heart rate, temperature, or other sensors used to sense a physiological aspect of the user, and can further contain components capable of providing data as to the proper alignment or placement of the wearable device on the user. The wearable device may communicate with a computing device, such as a mobile device, which can receive data from the wearable device and output notifications to the user, including notifications about proper or improper placement or alignment of the wearable device.
-
Devices and methods to compress sensor data
Issued US 9986069
Devices and methods to compress sensor data are generally described herein. An exemplary wearable device to compress sensor data may include a sensor including circuitry to sense sensor data, and a communication circuit to receive, from a remote device, a detected link quality of a low-power communication channel used to communicate with the remote device. The communication circuit is further to transmit the compressed data to the remote device over the low-power communication channel. The wearable device may further include a compressible sensor data module to apply a compression algorithm that compresses received sensor data based on the detected link quality to provide the compressed sensor data.
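A minimal sketch of a link-quality-driven compression choice might look like the following; the thresholds, the normalized quality scale, and the names are invented for the example:
```swift
// Illustrative compression policy keyed off the reported link quality
// (assumed normalized to 0.0 ... 1.0); thresholds are made up.
enum CompressionLevel {
    case none       // good link: send raw samples
    case light
    case aggressive // poor link: shrink payloads as much as possible
}

func compressionLevel(forLinkQuality quality: Double) -> CompressionLevel {
    switch quality {
    case ..<0.3: return .aggressive
    case ..<0.7: return .light
    default:     return .none
    }
}
```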
-
SENSOR DATA MANAGEMENT FOR MULTIPLE SMART DEVICES
Filed US 20170289738
One embodiment relates to an apparatus, comprising logic, at least partially incorporated into hardware, to: receive first sensor data associated with a first sensor of a first smart device; determine a first reliability factor associated with the first sensor data; receive second sensor data associated with a second sensor of a second smart device; and determine a second reliability factor associated with the second sensor data. The logic is further to determine a sensor data reporting plan based upon the first reliability factor and the second reliability factor, the sensor data reporting plan indicating whether each of the first sensor and the second sensor are to subsequently send their respective sensor data to a primary communication device.
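One way to picture the reliability-based reporting plan is the toy sketch below; the structures and the tie-breaking rule are assumptions, not the patented method:
```swift
// Hypothetical per-sensor report with a reliability factor (higher = more trustworthy).
struct SensorReport {
    let deviceID: String
    let reliability: Double
}

// Toy "reporting plan": only the device with the more reliable sensor keeps
// sending its data to the primary communication device.
func reportingPlan(_ first: SensorReport, _ second: SensorReport) -> [String: Bool] {
    let winnerID = first.reliability >= second.reliability ? first.deviceID : second.deviceID
    return [first.deviceID: first.deviceID == winnerID,
            second.deviceID: second.deviceID == winnerID]
}
```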
-
UPDATE FAILURE REBOOTING AND RECOVERY FOR A SMART DEVICE
Issued US 10810084
One embodiment relates to an apparatus, comprising logic, at least partially incorporated into hardware, to receive, by a primary communication device, an update image associated with a smart device, and initiate sending of the update image to the smart device, wherein a bootloader of the smart device is configured to update a memory of the smart device with the update image. The logic is further to determine whether the updating of the memory of the smart device with the update image has been interrupted, and responsive to determining that the updating of the memory of the smart device with the update image has been interrupted, send a first message to the smart device to instruct the bootloader of the smart device to resume updating of the memory of the smart device.
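The resume-after-interruption idea could be sketched as follows; the session fields, message names, and recovery rules are hypothetical:
```swift
// Hypothetical update session; the bootloader is assumed to persist bytesWritten
// across an interruption or reboot.
struct UpdateSession {
    let imageSize: Int
    var bytesWritten: Int
}

enum RecoveryMessage {
    case resumeUpdate(fromOffset: Int)
    case restartUpdate
}

// The primary device inspects the session and, if the update was interrupted,
// tells the bootloader to resume from the last confirmed offset.
func recoveryMessage(for session: UpdateSession) -> RecoveryMessage? {
    if session.bytesWritten >= session.imageSize { return nil }   // update completed
    if session.bytesWritten > 0 { return .resumeUpdate(fromOffset: session.bytesWritten) }
    return .restartUpdate
}
```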
-
EFFICIENT STORAGE AND RETRIEVAL FOR WEARABLE-DEVICE DATA
Filed US 20170090814
Technology described herein provides methods whereby a two-level indexing/hashing structure is used to efficiently coordinate storage of sensor measurements between local digital memory (e.g., at a mobile device) and remote digital memory (e.g., at a cloud storage system). The first level of the two-level indexing/hashing structure may include an array of first-level nodes that are sorted according to priority values. The priority values may be determined based on user data-querying activity. The second level of the two-level indexing/hashing structure may include second-level hash tables wherein buckets are associated with memory blocks of a predefined size. Sensor measurements that were taken during a specific time period may be stored near each other in memory and may be downloaded for local storage if user activity suggests that the user frequently has interest in data from that time period.
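A toy version of the two-level structure might look like the sketch below, with a priority-sorted first level and fixed-capacity second-level buckets; all names and policies here are illustrative assumptions:
```swift
// First level: nodes covering a time period, ordered by a query-driven priority.
struct FirstLevelNode {
    let timePeriod: Range<Int>   // e.g. covered epoch seconds
    var priority: Double         // bumped whenever the user queries this period
}

// Hottest (most-queried) periods sort first.
func sortedFirstLevel(_ nodes: [FirstLevelNode]) -> [FirstLevelNode] {
    nodes.sorted { $0.priority > $1.priority }
}

// Second level: a hash table whose buckets stand in for fixed-size memory blocks.
struct SecondLevelTable {
    let blockCapacity: Int                 // samples per block
    var buckets: [Int: [Double]] = [:]

    // Returns false when the block is full; a real system might spill the
    // remainder to cloud storage or evict a colder block.
    mutating func insert(_ sample: Double, intoBucket bucket: Int) -> Bool {
        var block = buckets[bucket, default: []]
        guard block.count < blockCapacity else { return false }
        block.append(sample)
        buckets[bucket] = block
        return true
    }
}
```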
-
MISALIGNMENT DETECTION OF A WEARABLE DEVICE
Filed US 20160278647
Technology for a mobile device operable to detect whether a wearable device is misaligned is disclosed. The mobile device can receive acceleration data from the wearable device. The acceleration data can include a first acceleration vector at a first time instance for the wearable device and a second acceleration vector at a second time instance for the wearable device. The mobile device can calculate a change in magnitude between the first acceleration vector and the second acceleration vector. The mobile device can calculate a misalignment vector as a difference between the first acceleration vector and the second acceleration vector. The mobile device can provide the change in magnitude and the misalignment vector to a classifier. The classifier can compare the change in magnitude and the misalignment vector to historical data to determine whether the wearable device is currently worn and misaligned with a body feature or surface.
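The two features described here (the change in magnitude and the misalignment vector) can be computed as in the sketch below; the classifier is stubbed out with a made-up threshold rule:
```swift
// Feature pair fed to the classifier; names are illustrative.
struct MisalignmentFeatures {
    let magnitudeChange: Float
    let misalignmentVector: SIMD3<Float>
}

private func length(_ v: SIMD3<Float>) -> Float {
    (v * v).sum().squareRoot()
}

func features(from a1: SIMD3<Float>, to a2: SIMD3<Float>) -> MisalignmentFeatures {
    MisalignmentFeatures(magnitudeChange: abs(length(a2) - length(a1)),
                         misalignmentVector: a2 - a1)
}

// Stand-in for the classifier that compares the features against historical data;
// the thresholds are invented for illustration.
func isLikelyMisaligned(_ f: MisalignmentFeatures) -> Bool {
    f.magnitudeChange < 0.1 && length(f.misalignmentVector) > 0.5
}
```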
-
Systems and Methods for Trailer Sway Monitoring and Mitigation
Filed 63/719,059
To be added
-
Regulating Communication Between a Vehicle and a User Device
Filed 20240172309
Systems and methods are provided for initiating first instructions to establish a communication session between a user device and a vehicle, wherein the first instructions include a wait time interval and the communication session is associated with a short-range wireless communication protocol. The systems and methods may determine that a number of unsuccessful attempts to establish the communication session exceeds a threshold value and, in response to determining that the number exceeds the threshold value, generate second instructions to establish the communication session, wherein the second instructions include a modification to the wait time interval. The second instructions may be initiated to establish the communication session between the user device and the vehicle.
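A minimal sketch of the retry-with-modified-wait-interval behavior, using an invented threshold and a simple doubling rule, might read:
```swift
import Foundation

// Hypothetical connection instructions; only the wait interval matters here.
struct ConnectionInstructions {
    var waitInterval: TimeInterval
}

// Once failures pass the threshold, produce second instructions with a modified
// wait interval (here simply doubled and capped; the real policy is unspecified).
func nextInstructions(afterFailures failures: Int,
                      current: ConnectionInstructions,
                      failureThreshold: Int = 3) -> ConnectionInstructions {
    guard failures > failureThreshold else { return current }
    return ConnectionInstructions(waitInterval: min(current.waitInterval * 2, 60))
}
```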
-
DISTANCE MODELING FOR VEHICLE PASSIVE ENTRY
Filed 20240051497
Systems and methods are provided for granting passive entry to a vehicle for a user with an authorized mobile device. Responsive to receiving, at a vehicle and from an application executing on a mobile device associated with a user, one or more signals associated with the mobile device, a signal strength and contextual information associated with the mobile device are determined based on the one or more signals. A passive entry feature of the vehicle is initiated based on the signal strength and the contextual information.
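As a rough illustration, the gating decision could be sketched like this; the RSSI threshold and the contextual checks are assumptions made for the example:
```swift
// Hypothetical snapshot of what the vehicle can infer from the phone's signals.
struct MobileContext {
    let isAuthorized: Bool   // the app holds a valid key for this vehicle
    let rssi: Int            // received signal strength, in dBm
    let isApproaching: Bool  // e.g. inferred from the trend of recent signal samples
}

// Gate the passive-entry feature on both signal strength and context.
func shouldInitiatePassiveEntry(_ context: MobileContext, rssiThreshold: Int = -60) -> Bool {
    context.isAuthorized && context.rssi >= rssiThreshold && context.isApproaching
}
```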
-
Managing Communication between a User Device and a Vehicle
Filed 20230114701
Systems and methods are provided for managing communications between a vehicle and a user device. A user device may be determined to be authorized to communicate with a vehicle. A signal strength of a signal transmitted between the vehicle and the user device is determined, and a determination is made whether the signal strength exceeds a threshold signal strength. In response to determining that the signal strength exceeds the threshold signal strength, a communication command is enabled to be transmitted from the user device to the vehicle.
-
MANAGING COMMUNICATION BETWEEN A USER DEVICE AND A VEHICLE
Filed TBD
Provisional patent
-
Object-Based Volumetric Video Coding
Filed 20220262041
Methods, apparatus, systems and articles of manufacture for object-based volumetric video coding are disclosed. An example apparatus disclosed herein includes a point annotator to receive point cloud data associated with an object and annotate points of the point cloud data with an object identifier of the object. The disclosed example apparatus also includes a projector to project the point cloud data onto projection planes to produce texture images and geometry images. The disclosed example apparatus further includes a patch generator to generate a patch based on the object identifier, the patch including the texture images and the geometry images of the object, the patch associated with the object identifier of the object. The disclosed example apparatus also includes an atlas generator to generate an atlas to include in encoded video data, the atlas including the patch.
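The object-annotated pipeline (annotate points with an object identifier, group them into per-object patches, pack the patches into an atlas) can be pictured with the toy data model below; it is illustrative only and not the MIV/V-PCC bitstream layout:
```swift
// Toy data model: points carry an object identifier, and patches group the
// data belonging to one object before being packed into an atlas.
struct AnnotatedPoint {
    let position: SIMD3<Float>
    let objectID: Int
}

struct Patch {
    let objectID: Int
    let points: [AnnotatedPoint]   // stand-in for the projected texture/geometry images
}

struct Atlas {
    let patches: [Patch]
}

// A real encoder would project each per-object group onto planes to produce
// texture and geometry images; here we only perform the grouping step.
func buildAtlas(from points: [AnnotatedPoint]) -> Atlas {
    let grouped = Dictionary(grouping: points, by: { $0.objectID })
    let patches = grouped.keys.sorted().map { Patch(objectID: $0, points: grouped[$0] ?? []) }
    return Atlas(patches: patches)
}
```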
-
Immersive Video Coding Using Object Metadata
Filed 20230007277
Methods, apparatus, systems and articles of manufacture for video coding using object metadata are disclosed. An example apparatus includes an object separator to separate input views into layers associated with respective objects to generate object layers for geometry data and texture data of the input views, a pruner to project the first object layer of a first basic view of the at least one basic view against the first object layer of a first additional view of the at least one additional view to generate a first pruned view and a first pruning mask, a patch packer to tag a patch with an object identifier of the first object, the patch corresponding to the first pruning mask, and an atlas generator to generate at least one atlas to include in encoded video data, the atlas including the patch.
-
Systems and Methods for Virtual Camera Configuration
Filed US pending
-
Pictorial Processor for Portable Device
Filed DE112016006020T5
Solutions for producing an image on an irregular surface are described. A graphic object is identified in an image to be displayed on the irregular surface. The object is distorted according to at least one shape function to compensate for irregularities of the surface. The pre-distorted object instances can then be added to a distortion-compensated image.
-
DEVICES AND METHODS TO COMPRESS SENSOR DATA
US 0
-
VIEWING ANGLE DEPENDENT OPTIMIZATION FOR GLASSES
US 0
Projects
-
Server-side Web Development with Swift Using Vapor and Kitura
-
This is my code repository for Hands-On Server-Side Web Development with Swift, published by Packt. Each project is written in Swift for both Vapor and Kitura frameworks. The code repository covers the following features:
(1) Build simple web apps using Vapor 3.0 and Kitura 2.5,
(2) Test, debug, build, and release server-side Swift applications,
(3) Design routes and controllers for custom client requests,
(4) Work with server-side template engines,
(5) Deploy web apps to a host in the cloud, and more.
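For flavor, a minimal Vapor 3 route of the kind covered by the repository might look like the following; this is based on the standard Vapor 3 template rather than code taken from the repository:
```swift
import Vapor

// Vapor 3 style route registration (the template calls this from configure.swift).
public func routes(_ router: Router) throws {
    // GET /hello -> plain-text response
    router.get("hello") { req in
        return "Hello, world!"
    }

    // GET /hello/<name> -> greets the caller by name
    router.get("hello", String.parameter) { req -> String in
        let name = try req.parameters.next(String.self)
        return "Hello, \(name)!"
    }
}
```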
Recommendations received
5 people have recommended Angus
More activity by Angus
-
Hiring now https://lnkd.in/gfi7Qejz
Shared by Angus Yeung
-
Defining the software that defines modern vehicles takes talented and passionate individuals. Are you one of them? https://lnkd.in/gBvbkBjX
Liked by Angus Yeung
-
We just announced that Amazon has rolled out 20K custom electric delivery vans across the U.S., and I’m really excited about this milestone. I…
Liked by Angus Yeung
-
Less than 2 wks since Rivian closed JV with VW Group, we’re continuing the momentum with conditional commitment from US Dept of Energy loan program…
Liked by Angus Yeung
-
Rivian and Volkswagen Group have announced the launch of a new joint venture, Rivian and VW Group Technology, LLC, to enhance the development…
Liked by Angus Yeung
-
https://lnkd.in/gWGFZczH
Liked by Angus Yeung
-
Last night, Cornell Tech’s 2024 Runway and Spinouts teams presented their work and pitched their companies and missions to a panel of esteemed…
Liked by Angus Yeung
-
Yesterday was a big day for Quintar, Inc at the #svgnext event. We publicly showcased Quintar’s Spatial Stream and Spatial Sync technologies for the…
Liked by Angus Yeung
-
🚀 We're Hiring Talents with LLM/VLM Expertise! Excited to announce opportunities at Rivian and Volkswagen Group Technologies for individuals…
Liked by Angus Yeung
-
The Rivian and Volkswagen Group Technologies Joint Venture combines Rivian's clean-sheet software stack and electrical architecture with Volkswagen…
Liked by Angus Yeung
-
I am extremely humbled to receive the 2024 ACM SIGMM Outstanding Technical Achievement Award at MM2024 held last week in Melbourne, Australia. I have…
Liked by Angus Yeung
-
Meet Strahinja Stefanovic, our new Senior ML Engineer who joined us on an innovative biotech project! ⚙️✨ Strahinja enjoys a morning walk 🚶♂️…
Liked by Angus Yeung
-
After 33 incredible years at Intel, I retired yesterday, still a young at heart engineer. Thanks to the many talented and good folks who crossed my…
Liked by Angus Yeung