DETAILED ACTION
This Office Action is in response to the Amendment filed on November 11, 2025.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
In the instant Amendment, claims 4, 7, 9-13, 16-18 & 20-23 have been amended; claims 4, 18 & 22 are independent; claims 1-3, 5, 8, 14-15 & 19 were canceled; and claims 24-28 have been added. Claims 4, 6-7, 9-13, 16-18 & 20-28 have been examined and are pending. This Action is made FINAL.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see pages 6-9, filed 11/11/2025, with respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. § 102(a)(2) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Novak.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 4, 6-7, 9-13, 16-18 & 20-28 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (Gupta), U.S. Pub. Number 2022/0358727, in view of Novak et al. (Novak), U.S. Pub. Number 2017/0103582.
Regarding claim 4; Gupta discloses a method, the method comprising:
obtaining image data from a real-world environment (par. 0126; fig. 6; capture images of the real-world environment.);
determining that one or more real-world objects in the image data from the real-world environment are private user objects, and identifying representations of the private user objects as private scene components (par. 0181; privacy settings associated with an object may specify any suitable granularity of permitted access or denial of access; access or denial of access may be specified for particular users (e.g., only me, my roommates, my boss), users within a particular degree-of-separation (e.g., friends, friends-of-friends), user groups (e.g., the gaming club, my family), user networks (e.g., employees of particular employers, students or alumni of particular university), all users (“public”), no users (“private”), users of third-party systems 170, particular applications (e.g., third-party applications, external websites).);
replacing the private scene components with generic scene components (par. 0081; the generic entity resolution 342 may resolve the entities by categorizing the slots and meta slots into different generic topics.), wherein each generic scene component has dimensions determined to correspond to the corresponding replaced private scene component (pars. 0183 & 0187; different objects of the same type associated with a user may have different privacy settings; different types of objects associated with a user may have different types of privacy settings; a first user may specify that the first user’s status updates are public, but any images shared by the first user are visible only to the first user’s friends on the online social network; the first user may then update the privacy settings to allow location information to be used by a third-party image-sharing application in order to geotag photos.); and
exposing the generic scene components to an application on an artificial reality (XR) device (par. 0148; exposing the AR content and anchor library to developers may enable developers to remotely deploy their effects onto broad categories of anchors without the developers having to visit all those locations in person.); and
causing modification of XR content presented by the application, based, at least in part, on the generic scene components (par. 0076; pick up previous conversation threads at any point in the future, synthesize all signals to understand micro and personalized context, learn interaction patterns and preferences from user’s historical behavior and accurately suggest interactions that they may value, generate highly predictive proactive suggestions based on micro-context understanding, understand what content a user may want to see at what time of a day, and understand the changes in a scene and how that may impact the user’s desired content.).
Gupta fails to explicitly disclose that the one or more real-world objects are determined to be private user objects by identifying that the one or more real-world objects meet one or more privacy criteria.
However, Novak discloses the one or more real-world objects are determined to be private user objects (Novak: par. 0076; a particular object located within the first environment is identified; identification of the particular object may be performed via image processing techniques such as object recognition techniques, facial recognition techniques, or pattern matching techniques.) by identifying that the one or more real-world objects meet one or more privacy criteria (Novak: par. 0031; the consumer of the virtual object (e.g., a second person wearing a second HMD receiving information associated with the virtual object) may filter or restrict the display of the virtual object if the computing device from which the virtual object is generated does not meet certain criteria (e.g., is associated with an HMD that is not classified as belonging to a “friend”)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Novak with the systems and methods of Gupta such that the one or more real-world objects are determined to be private user objects by identifying that the one or more real-world objects meet one or more privacy criteria, in order to improve object detection and tracking (Novak: par. 0063).
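For illustration only (not part of the prosecution record), the following minimal sketch approximates the sequence recited in claim 4: obtaining image data, identifying private user objects against privacy criteria, and replacing them with dimension-matched generic scene components that are exposed to an XR application. All names and helper functions below are hypothetical and are not drawn from Gupta or Novak.

    # Illustrative sketch only; hypothetical names approximating the recited steps of claim 4.
    from dataclasses import dataclass

    @dataclass
    class SceneComponent:
        label: str          # semantic object type, e.g., "photograph"
        dimensions: tuple   # (width, height, depth) in meters

    def meets_privacy_criteria(component, privacy_criteria):
        """Identify a real-world object as private by comparing its semantic
        label against user-specified privacy criteria."""
        return component.label in privacy_criteria

    def replace_private_components(components, privacy_criteria):
        """Replace each private scene component with a generic component whose
        dimensions correspond to the replaced component; non-private
        components are passed through unchanged."""
        exposed = []
        for c in components:
            if meets_privacy_criteria(c, privacy_criteria):
                exposed.append(SceneComponent(label="generic", dimensions=c.dimensions))
            else:
                exposed.append(c)
        return exposed

    # Example: a detected photograph is exposed to the XR application only as a
    # generic, dimension-matched component, while the table is exposed as-is.
    scene = [SceneComponent("photograph", (0.3, 0.2, 0.01)),
             SceneComponent("table", (1.2, 0.8, 0.7))]
    print(replace_private_components(scene, privacy_criteria={"photograph", "document"}))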
Regarding claim 6; Gupta and Novak disclose the method of claim 4, wherein Gupta further discloses at least one of the generic scene components includes a geometric shape made up of multiple geometric shapes that are oriented based on a structure of a corresponding private user object (Gupta: par. 0050; the non-speech data may be subject to geometric constructions, which may comprise constructing objects surrounding a user using any suitable type of data collected by a client system 130; for instance, a user may be wearing AR glasses, and geometric constructions may be utilized to determine spatial locations of surfaces and items (e.g., a floor, a wall, a user’s hands); the non-speech data may be inertial data captured by AR glasses or a VR headset, and may be data associated with linear and angular motions (e.g., measurements associated with a user’s body movements).).
Regarding claim 7; Gupta and Novak disclose the method of claim 4, wherein Gupta further discloses the obtaining the image data is performed using sensing devices of the XR device including one or more cameras and one or more depth sensors (Gupta: par. 0149; AR content may be placed on features from both of these datasets so as to sense and maintain a single datastore with duplicate features removed; AR content and anchor library may be ready for when 3D maps scale and the AR content and anchor library is updated to add public indoor features; for instance, to answer questions like “what’s at this location?” the AR content and anchor library may maintain a comprehensive segmentation model that divides the world’s surface (initially in 2D but eventually in 3D) into different objects and those objects should be linked to entities within the database.).
Regarding claim 9; Gupta and Novak disclose the method of claim 4, wherein Gupta further discloses each respective real-world object is associated with semantic information that defines an object type and object use for the real-world object (Gupta: par. 0073; the capability of visual cognition may enable the assistant system 140 to recognize interesting objects in the world through a combination of existing machine-learning models and one-shot learning, recognize an interesting moment and auto-capture it, achieve semantic understanding over multiple visual frames across different episodes of time, provide platform support for additional capabilities in places or objects recognition, recognize a full set of settings and micro-locations including personized locations, recognize complex activities, recognize complex gestures to control a client system 130, handle images/videos from egocentric cameras (e.g., with motion, capture angles, resolution), accomplish similar levels of accuracy and speed regarding images with lower resolution, conduct one-shot registration and recognition of places and objects, and/or perform visual recognition on a client system 130.).
Regarding claim 10; Gupta and Novak disclose the method of claim 9, wherein Gupta further discloses at least part of the semantic information for at least one of the one or more real-world objects is manually defined by a user of the XR device (Gupta: par. 0093; generate the contextual updates for a conversation with the user; a slot resolution component may then recursively resolve the slots in the update operators with resolution providers including the knowledge graph and domain agents.).
Regarding claim 11; Gupta and Novak disclose the method of claim 9, wherein Gupta further discloses at least part of the semantic information for the one or more real-world objects is defined by deriving the semantic information from one or more machine learning models detecting object types based on images of the real-world environment (Gupta: par. 0079; a slot may be a named sub-string corresponding to a character string within the user input representing a basic semantic entity; for instance, a slot for “pizza” may be [SL:dish]; a set of valid or expected named slots may be conditioned on the classified intent.).
Regarding claim 12; Gupta and Novak disclose the method of claim 9, wherein Gupta further discloses at least part of the semantic information for the one or more real-world objects specifies the object use as a use that was mapped to the corresponding object type (Gupta: par. 0079; the NLU module 210 may process the domain classification/selection results using a meta-intent classifier 336a; the meta-intent classifier 336a may determine categories that describe the user’s intent; an intent may be an element in a pre-defined taxonomy of semantic intentions, which may indicate a purpose of a user interaction with the assistant system 140.).
Regarding claim 13; Gupta and Novak disclose the method of claim 9, wherein Gupta further discloses identifying that the one or more real-world objects meet the one or more privacy criteria includes comparing the semantic information with the one or more privacy criteria (Gupta: par. 0080; the NLU module 210 may comprise one or more programs that perform naive semantics or stochastic semantic analysis, and may further use pragmatics to understand a user input.).
Regarding claim 16; Gupta and Novak disclose the method of claim 4, wherein Gupta further discloses identifying that the one or more real-world objects meet the one or more privacy criteria includes determining that a respective real-world object matches a selection, by a user of the XR device, of an object as private (Gupta: par. 0139; artificial reality experience may include one or more predefined locations, places, communities, video gaming levels (e.g., total space available for users completing a predetermined obstacle or objection), multiplayer gaming competitions (e.g., matches), or gaming modes (e.g., single-player mode, multiplayer mode, skill level, and so forth) for user join-ups (e.g., “Artificial Reality Experience 1”), 802B (e.g., “Artificial Reality Experience 2”), 802C (e.g., “Artificial Reality Experience 3”), and 802D (“Artificial Reality Experience N”).).
Regarding claim 17; Gupta and Novak disclose the method of claim 4, wherein Gupta further discloses identifying that the one or more real-world objects meet the one or more privacy criteria includes a machine learning model identifying that a real-world object includes alphanumeric symbols or a photograph (Gupta: par. 0122; artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs).).
Regarding claim 18; Claim 18 is directed to a computer-readable storage medium and has a scope similar to that of claim 4. Therefore, claim 18 is unpatentable for the same reasons.
Regarding claims 20-21; Claims 20-21 are directed to the computer-readable storage medium of claim 18 and have scopes similar to those of claims 11 & 13. Therefore, claims 20-21 are unpatentable for the same reasons.
Regarding claim 22; Claim 22 is directed to a computing system and has a scope similar to that of claim 4. Therefore, claim 22 is unpatentable for the same reasons.
Regarding claim 23; Claim 23 is directed to the computing system of claim 22 and has a scope similar to that of claim 17. Therefore, claim 23 is unpatentable for the same reasons.
Regarding claim 24; Claim 24 is directed to the computer-readable storage medium of claim 18 and has a scope similar to that of claim 17. Therefore, claim 24 is unpatentable for the same reasons.
Regarding claim 25; Claim 25 is directed to the computing system of claim 22 and has a scope similar to that of claim 12. Therefore, claim 25 is unpatentable for the same reasons.
Regarding claim 26; Claim 26 is directed to the computing system of claim 22 and has a scope similar to that of claim 13. Therefore, claim 26 is unpatentable for the same reasons.
Regarding claim 27; Claim 27 is directed to the computer-readable storage medium of claim 18 and has a scope similar to that of claim 16. Therefore, claim 27 is unpatentable for the same reasons.
Regarding claim 28; Claim 28 is directed to the computing system of claim 22 and has a scope similar to that of claim 16. Therefore, claim 28 is unpatentable for the same reasons.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOI V LE whose telephone number is (571)270-5087. The examiner can normally be reached 9:00 AM - 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shewaye Gelagay, can be reached on 571-272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KHOI V LE/
Primary Examiner, Art Unit 2436