Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/3/2026 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7-8 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Mallinson (Pub. No. US 2018/0095542) in view of Cartwright (US Patent 10,600,246).
Regarding claim 1, Mallinson discloses a processor-implemented method for physical object tracking in virtual reality (Fig. 7A, 710), the method comprising: receiving input from a user identifying a specific physical object during a virtual reality session in a virtual reality environment (para [0062] “The user may select any one of the objects in the list for user interaction. The selection may be done, for example, using buttons or input options on a controller 106 or via voice command or using input options on the HMD 104 or via eye signals. In this example, user selection may be detected at the HMD 104. In another example, the selection may be done by physically reaching out to the object”); identifying the specific physical object in a physical proximity of the user (para [0087] “one or more cameras on the HMD 104 may be used to determine the user's proximity in relation to the obstacle and provide the halo image at the display screen of the HMD”); tracking the location of the specific physical object relative to the user (para [0058] “The external camera 108 may be a depth camera and the images of the real-world objects captured may provide depth, so that the location and orientation of the real-world objects may be easily determined”); and responsive to receiving a command from the user, displaying the specific physical object to the user through a portal in the virtual reality environment (para [0061] “the list of objects from the real-world environment presented in the VR space may include only those objects that are in the line of sight of the user (i.e., field of view of the HMD worn by the user) and may be dynamically updated based on the direction the user is facing in the real-world environment. For example, if the user is facing forward, as illustrated in FIG. 2B, only those objects that are between the user and the external camera 108 are included in the list while the other objects that are behind the user (i.e., drinking cup 229, straw 227, iPad 226) are not included”).
It is noted that Mallinson does not specifically disclose “wherein the portal comprises a flat plane in the virtual reality environment displaying real-time camera footage of a physical world beyond the portal's physical location, dynamically positioned to remain disposed between the user and the specific physical object based on the tracking, and wherein a radius of the portal decreases as a distance between the user and the specific physical object increases”. However, these claimed features are well known in the art as taught by Cartwright. Cartwright discloses displaying the specific physical object to the user through a portal in the virtual reality environment similar to Mallinson (see abstract of Cartwright). Cartwright further discloses “wherein the portal comprises a flat plane in the virtual reality environment displaying real-time camera footage of a physical world beyond the portal's physical location, dynamically positioned to remain disposed between the user and the specific physical object based on the tracking, and wherein a radius of the portal decreases as a distance between the user and the specific physical object increases” (col. 7, lines 43-50: “the position of the passthrough portal 234 is fixed relative to the physical environment, while the size of the passthrough portal 234 is relative to the POV of the HMD 201. For example, a center point of the passthrough portal 234 may remain in a fixed position relative to the physical environment while the passthrough portal 234 is scaled relative to a distance from the POV of the HMD 201 to the passthrough portal 234 in the virtual environment 218”). In other words, the user can track the location of the door 212 through the passthrough portal 234, and the passthrough portal 234 is dynamically positioned (e.g., dynamically changed in size) to remain disposed on the head-mounted display between the user and a physical object such as the door 212 based on the tracked distance of the door. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the passthrough portal of Mallinson with the size control of the passthrough portal based on the distance between the user and the physical object as taught by Cartwright so as to alert the user when the user is approaching a physical object.
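For illustration only, the disputed limitation can be modeled as follows. This is a minimal sketch, not taken from Mallinson or Cartwright; every function name and constant, and the inverse-proportional scaling law itself, is an illustrative assumption:

```python
import numpy as np

def portal_center(user_pos, object_pos, offset=0.8):
    # Keep the portal on the user-object line of sight so it stays
    # "disposed between the user and the specific physical object".
    user_pos = np.asarray(user_pos, dtype=float)
    object_pos = np.asarray(object_pos, dtype=float)
    direction = object_pos - user_pos
    direction /= max(np.linalg.norm(direction), 1e-6)
    return user_pos + offset * direction

def portal_radius(user_pos, object_pos, base_radius=0.5,
                  reference_dist=1.0, min_radius=0.05):
    # The radius decreases as the user-object distance increases;
    # the constants are placeholders, not values from either reference.
    dist = np.linalg.norm(np.asarray(object_pos, dtype=float)
                          - np.asarray(user_pos, dtype=float))
    return max(min_radius, base_radius * reference_dist / max(dist, 1e-6))
```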
Regarding claims 7, 14, Mallinson discloses wherein the tracking is persistent across multiple virtual reality experiences (para [0061] “the list of objects from the real-world environment presented in the VR space may include only those objects that are in the line of sight of the user (i.e., field of view of the HMD worn by the user) and may be dynamically updated based on the direction the user is facing in the real-world environment”).
Regarding claims 8 and 15, Mallinson further discloses one or more processors (Fig. 8, 1602), one or more computer-readable tangible storage medium (Fig. 8, 1604), and program instructions stored on at least one of the one or more tangible storage medium for execution by at least one of the one or more processors via at least one of the one or more memories (para [0095]), the program instructions executable by a processor to cause the processor to perform a method comprising: receiving input from a user identifying a specific physical object during a virtual reality session in a virtual reality environment (para [0062] “The user may select any one of the objects in the list for user interaction. The selection may be done, for example, using buttons or input options on a controller 106 or via voice command or using input options on the HMD 104 or via eye signals. In this example, user selection may be detected at the HMD 104. In another example, the selection may be done by physically reaching out to the object”); identifying the specific physical object in a physical proximity of the user (para [0087] “one or more cameras on the HMD 104 may be used to determine the user's proximity in relation to the obstacle and provide the halo image at the display screen of the HMD”); tracking the location of the specific physical object relative to the user (para [0058] “The external camera 108 may be a depth camera and the images of the real-world objects captured may provide depth, so that the location and orientation of the real-world objects may be easily determined”); and responsive to receiving a command from the user, displaying the specific physical object to the user through a portal in the virtual reality environment (para [0061] “the list of objects from the real-world environment presented in the VR space may include only those objects that are in the line of sight of the user (i.e., field of view of the HMD worn by the user) and may be dynamically updated based on the direction the user is facing in the real-world environment. For example, if the user is facing forward, as illustrated in FIG. 2B, only those objects that are between the user and the external camera 108 are included in the list while the other objects that are behind the user (i.e., drinking cup 229, straw 227, iPad 226) are not included”).
It is noted that Mallinson does not specifically disclose “wherein the portal comprises a region in the virtual reality environment where the virtual reality environment is not displayed, dynamically positioned to remain, disposed between the user and the specific physical object based on the tracking”. However, Cartwright teaches “wherein the portal comprises a region in the virtual reality environment where the virtual reality environment is not displayed, dynamically positioned to remain, disposed between the user and the specific physical object based on the tracking” (see Fig. 6 and col. 6, lines 50-60: “The passthrough portal 234 may have a video feed 236 of a portion of the imaged physical environment displayed therein. For example, the HMD may image a FOV of the physical environment that corresponds to the FOV of the virtual environment 218 presented to the user, and only the portion of the video feed of the physical environment that corresponds to the location, size, and shape of the passthrough portal 234 may be shown in the video feed 236 of the passthrough portal 234. In the illustrated example, the passthrough portal 234 is positioned and sized to provide a video feed 236 of the door 212 of the user's office”). In other words, the user can track the location of the door 212 through the passthrough portal 234, and the passthrough portal 234 is dynamically positioned (e.g., dynamically changed in size) to remain disposed on the head-mounted display between the user and a physical object such as the door 212 based on the tracked distance of the door. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mallinson with the features of the passthrough portal as taught by Cartwright because Cartwright provides a video feed of the physical object located in front of the user in a virtual environment.
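As a rough illustration of the compositing described in Cartwright's quoted passage, the per-frame step can be sketched as replacing the portal region of the rendered VR frame with the corresponding region of the camera feed. This is a hypothetical sketch assuming array-based frames, not the reference's actual implementation:

```python
import numpy as np

def circular_mask(height, width, cy, cx, r):
    # Boolean mask marking the portal's location, size, and shape.
    ys, xs = np.ogrid[:height, :width]
    return (ys - cy) ** 2 + (xs - cx) ** 2 <= r ** 2

def composite_passthrough(vr_frame, camera_frame, portal_mask):
    # Inside the portal the virtual environment is not displayed; the
    # corresponding pixels of the real-world camera feed are shown instead.
    out = vr_frame.copy()
    out[portal_mask] = camera_frame[portal_mask]
    return out
```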
Claims 2-4, 9-11 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Mallinson (Pub. No. US 2018/0095542) in view of Cartwright (US Patent 10,600,246) and further in view of Jeong et al. (Pub. No. US 2024/0257542).
Regarding claims 2, 9 and 16, Mallinson as modified by Cartwright fails to disclose wherein the identifying further comprises: identifying one or more physical objects of a same type as the specific physical object in the physical proximity to the user using a generic object recognition model; and identifying the specific physical object from among the located physical objects using a specific object recognition model. However, Jeong is cited to teach identifying one or more physical objects of a same type as the specific physical object in the physical proximity to the user using a generic object recognition model; and identifying the specific physical object from among the located physical objects using a specific object recognition model (Jeong: Fig. 6C and para [0136-0137], “The object recognition model 620 can be trained using a plurality of image data including an object and learning data labeled with object identification information on the object included in the image data. For example, FIG. 6C(b) illustrates an embodiment of training an object recognition model 620 using training image data 610a to 610d and training object identification information 631a to 631d labeled on the training image data 610a to 610d”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mallinson and Cartwright with the features of the object recognition model as taught by Jeong so as to recognize an object located in a user's region of interest and automatically change photographing conditions without using an identifiable marker.
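The two-stage identification recited in the claim can be sketched as below. The `detect`/`score` interfaces and the detection fields are hypothetical placeholders; Jeong's trained recognition model is cited above only for the general technique:

```python
def identify_specific_object(frame, target_class, generic_model,
                             specific_model, threshold=0.5):
    # Stage 1: a generic object recognition model finds every instance
    # of the target type (e.g., all mugs) in the user's proximity.
    candidates = [d for d in generic_model.detect(frame)
                  if d.label == target_class]
    if not candidates:
        return None
    # Stage 2: an instance-specific model scores each candidate to pick
    # out the user's particular object from among those located.
    scored = [(specific_model.score(frame, d.box), d) for d in candidates]
    best_score, best = max(scored, key=lambda item: item[0])
    return best if best_score >= threshold else None
```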
Regarding claims 3, 10 and 17, Mallinson as modified by Cartwright and Jeong discloses wherein the specific object recognition model is trained using a few-shot image classification process (see Fig. 6C of Jeong).
Regarding claims 4, 11 and 18, Mallinson as modified by Cartwright and Jeong discloses wherein the specific object recognition model is trained using a method comprising: responsive to prompting the user to record a plurality of images of the specific physical object and a background, receiving the plurality of images; processing the received images into training images; and training the specific object recognition model on the training images (Jeong, para [0136-0137] “The object recognition model 620 can be trained using a plurality of image data including an object and learning data labeled with object identification information on the object included in the image data. For example, FIG. 6C(b) illustrates an embodiment of training an object recognition model 620 using training image data 610a to 610d and training object identification information 631a to 631d labeled on the training image data 610a to 610d. Here, the training object identification information 631a to 631d can include at least one of cooking vessels, cooking utensils, food, and a user's hands”).
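The claimed training method can be sketched as a short capture-process-train loop. `capture_images`, `build_model`, and `model.fit` are hypothetical callables standing in for the capture UI and model constructor; the preprocessing shown is likewise illustrative:

```python
import numpy as np

def preprocess(image):
    # Illustrative processing step: normalize pixel values to [0, 1];
    # a real pipeline would also crop, resize, label, and augment.
    return np.asarray(image, dtype=np.float32) / 255.0

def train_specific_model(capture_images, build_model, n_shots=10):
    # 1) Prompt the user to record images of the object and its background,
    # 2) process the received images into training images, 3) train.
    raw = capture_images(
        f"Please record {n_shots} images of the object and its background")
    training_images = [preprocess(img) for img in raw]
    model = build_model()
    model.fit(training_images)  # few-shot: only a handful of examples
    return model
```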
Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mallinson (Pub. No. US 2018/0095542) in view of Cartwright (US Patent 10,600,246) and Jeong et al. (Pub. No. US 2024/0257542) as applied to claims 4, 11 and 18 above, and further in view of Tu et al. (Pub. No. US 2014/0140610).
Regarding claims 5, 12 and 19, Mallinson as modified fails to disclose wherein the processing further comprises: dividing the received images of the background into a plurality of regions to create a plurality of negative examples; and compositing images of the specific physical object onto the plurality of regions to create a plurality of positive examples, wherein the training images comprise the plurality of negative examples and the plurality of positive examples. However, Tu is cited to teach dividing the received images of the background into a plurality of regions to create a plurality of negative examples; and compositing images of the specific physical object onto the plurality of regions to create a plurality of positive examples, wherein the training images comprise the plurality of negative examples and the plurality of positive examples (Tu, para [0011] “a set of images may be received, and saliency instances may be extracted from the set of images. Top saliency instances extracted from individual image may be labeled as a positive bag, while least saliency instances may be labeled as a negative bag. Both positive and negative bags of the set of images may be collected to train the statistical models using a maximum margin learning algorithm. This algorithm may be implemented to discriminate positive bags (e.g., foreground objects) from negative bags (e.g., background objects) and to maximize differences among the positive bags. The trained statistical models may be then used to discover object classes”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mallinson as modified with the features of separating objects from background for the purpose of image training as taught by Tu so as to increase the effectiveness of the object recognition process.
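For illustration, the claimed processing (as distinct from Tu's saliency-bag technique quoted above) can be sketched as tiling the background into regions for negatives and pasting the object crop into each region for positives. The grid size and the naive paste are assumptions:

```python
import numpy as np

def build_training_examples(background, object_crop, grid=(4, 4)):
    # Divide the background image into regions (negative examples), then
    # composite the object crop onto each region (positive examples).
    h, w = background.shape[:2]
    rh, rw = h // grid[0], w // grid[1]
    negatives, positives = [], []
    for i in range(grid[0]):
        for j in range(grid[1]):
            region = background[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            negatives.append(region.copy())
            pos = region.copy()
            oh = min(object_crop.shape[0], rh)
            ow = min(object_crop.shape[1], rw)
            # Naive paste; real compositing would alpha-blend the object.
            pos[:oh, :ow] = object_crop[:oh, :ow]
            positives.append(pos)
    return positives, negatives
```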
Claims 6, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mallinson (Pub. No. US 2018/0095542) in view of Cartwright (US Patent 10,600,246) and further in view of Dehlin et al. (Patent No. US 7,519,233).
Regarding claims 6, 13 and 20, Mallinson as modified by Cartwright fails to teach wherein the tracking further comprises: responsive to determining that the specific physical object has moved, utilizing a moving path prediction algorithm to select the most likely location of the specified physical object based on a movement history of the specific physical object. However, Dehlin is cited to teach wherein the tracking further comprises: responsive to determining that the specific physical object has moved, utilizing a moving path prediction algorithm to select the most likely location of the specified physical object based on a movement history of the specific physical object (Dehlin, col. 16, lines 43-57: “an object prediction module predicts the next location of a moving object, such as the user's finger/hand, to detect gestures that include movement. The object prediction module searches for possible re-appearance of the object in a location predicted by a motion model, which examines the last several observations of the object collected before the object disappeared. For example, using a linear motion model, the predicted position is a function of the velocity of the object before it disappeared and how long it has been since the object disappeared. The purpose for employing the object prediction module when performing tracking and detecting the corresponding movement of the object is to enable the path of the object to be accurately reconstructed when identifying a gesture. Linear prediction methods can be applied to assist in achieving this goal”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mallinson and Cartwright with the features of location prediction of the moving object as taught by Dehlin because Dehlin can locate the object even when it has moved out of the field of view.
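The linear motion model in Dehlin's quoted passage (predicted position as a function of the object's last velocity and the time since it disappeared) can be sketched as follows; the `history` structure and names are illustrative assumptions:

```python
import numpy as np

def predict_location(history, time_since_lost):
    # history: list of (timestamp, position) observations collected
    # before the object disappeared, oldest first.
    (t0, p0), (t1, p1) = history[-2], history[-1]
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    velocity = (p1 - p0) / max(t1 - t0, 1e-6)
    # Predicted position = last observed position + velocity * elapsed time.
    return p1 + velocity * time_since_lost
```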
Response to Arguments
Applicant's arguments filed 1/2/2026 have been fully considered but they are not persuasive.
Applicant argues that the cited passage of Cartwright, and Cartwright as a whole, at best could be said to teach or suggest decreasing the radius of a portal as a distance between the POV of the HMD and the passthrough portal itself increases, and that Cartwright nowhere teaches or suggests changing a radius of a portal relative to a distance between a specific physical object and a user. This argument is not persuasive. Cartwright clearly discloses “the position of the passthrough portal 234 is fixed relative to the physical environment, while the size of the passthrough portal 234 is relative to the POV of the HMD 201. For example, a center point of the passthrough portal 234 may remain in a fixed position relative to the physical environment while the passthrough portal 234 is scaled relative to a distance from the POV of the HMD 201 to the passthrough portal 234 in the virtual environment 218” (see col. 7, lines 43-50). In other words, the user can track the distance of the door 212 through the passthrough portal 234, and the passthrough portal 234 is dynamically positioned (e.g., dynamically changed in size) to remain disposed on the head-mounted display between the user and a physical object such as the door 212 based on the tracked distance of the door.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAO M WU whose telephone number is (571)272-7761. The examiner can normally be reached Monday to Friday 7:30am to 4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Beck can be reached at (571) 272-3750. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAO M WU/
Examiner, Art Unit 2613