Prosecution Insights
Last updated: April 19, 2026
Application No. 18/715,958

HEAD MOUNTED DISPLAY, HEAD MOUNTED DISPLAY SYSTEM, AND METHOD OF DISPLAYING HEAD MOUNTED DISPLAY

Non-Final OA: §103, §112
Filed: Jun 03, 2024
Examiner: GUO, XILIN
Art Unit: 2616
Tech Center: 2600 (Communications)
Assignee: Maxell, Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (374 granted / 456 resolved); +20.0% vs TC avg, above average
Interview Lift: +17.4% among resolved cases with interview (a strong lift)
Typical Timeline: 2y 5m average prosecution; 18 applications currently pending
Career History: 474 total applications across all art units
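
For readers checking the tiles above, here is a minimal sketch (hypothetical Python, not the dashboard's actual code) of how the headline figures appear to be derived from the raw counts. The Tech Center baseline is back-computed from the stated +20.0% delta, and treating the 99% with-interview figure as the allow rate plus the interview lift is an assumption:

```python
granted, resolved = 374, 456                 # career counts shown above
allow_rate = granted / resolved              # 0.8202... -> the 82% Career Allow Rate
tc_average = allow_rate - 0.200              # ~62%, implied by the +20.0% vs TC avg delta
interview_lift = 0.174                       # the +17.4% Interview Lift tile

print(f"career allow rate: {allow_rate:.1%}")                 # 82.0%
print(f"implied TC average: {tc_average:.1%}")                # 62.0%
print(f"with interview: {allow_rate + interview_lift:.0%}")   # 99% (99.4% before rounding)
```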

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)
Parenthesized deltas are relative to the Tech Center average estimate. Based on career data from 456 resolved cases.
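
The per-statute deltas all point to the same baseline. A quick sketch (illustrative Python; it assumes each delta is simply the examiner's rate minus the Tech Center average) back-computes the reference estimate:

```python
# Examiner's statute-specific rates (%) as shown above, with their stated deltas.
examiner = {"§101": 7.6, "§103": 56.3, "§102": 12.8, "§112": 19.0}
delta    = {"§101": -32.4, "§103": 16.3, "§102": -27.2, "§112": -21.0}

for statute in examiner:
    tc_avg = examiner[statute] - delta[statute]
    print(f"{statute}: examiner {examiner[statute]:.1f}%, TC avg {tc_avg:.1f}%")
# All four implied TC averages come out to 40.0%, i.e., the reference estimate
# is a single 40.0% line across statutes.
```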

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Preliminary Amendment

The preliminary amendment filed on June 03, 2024 has been entered. In view of the amendment to the specification, a "Cross-Reference to Related Application" paragraph has been added.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 6 and 13 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

Dependent claims 6 and 13 recite the limitation "stores an identification number information of the wireless communication apparatus, the controller, estimates the distance to the wireless communication apparatus from received signal strength or communication delay time of the wireless communication interface, extracts the wireless communication apparatus that matches the type condition and distance condition based on the identification number information as object, superimposes virtual image object of showing extracted object on the image of the virtual space and displays on the display". However, the limitations "the wireless communication apparatus", "estimates the distance to the wireless communication apparatus from received signal strength" and "extracts the wireless communication apparatus that matches the type condition and distance condition based on the identification number information as object" recited in claims 6 and 13 were not described in the specification of the present invention. Accordingly, the specification does not enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make the invention commensurate in scope with the limitations recited in the claims.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

Dependent claim 6 depends upon independent claim 1 and dependent claim 13 depends upon independent claim 8. Each claim recites the limitation "stores an identification number information of the wireless communication apparatus, the controller, estimates the distance to the wireless communication apparatus from received signal strength or communication delay time of the wireless communication interface, extracts the wireless communication apparatus that matches the type condition and distance condition based on the identification number information as object, superimposes virtual image object of showing extracted object on the image of the virtual space and displays on the display". However, neither claim describes "the wireless communication apparatus", which renders each claim indefinite. As discussed above, the specification of the present invention does not describe the limitations "the wireless communication apparatus", "estimates the distance to the wireless communication apparatus from received signal strength" and "extracts the wireless communication apparatus that matches the type condition and distance condition based on the identification number information as object" recited in claims 6 and 13. Therefore, the examiner deems the claims indefinite as they fail to particularly point out and distinctly claim what Applicant regards as the invention. Accordingly, the claims are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
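
The disputed limitation in claims 6 and 13 estimates distance to a wireless communication apparatus "from received signal strength or communication delay time". For orientation only, here is a minimal sketch of the standard log-distance path-loss estimate that such language typically describes; the calibration constants are assumptions, and nothing below is taken from the application's specification:

```python
def distance_from_rssi(rssi_dbm: float,
                       rssi_at_1m_dbm: float = -59.0,    # assumed calibration value
                       path_loss_exponent: float = 2.0   # ~2 for free space
                       ) -> float:
    """Log-distance path-loss model: RSSI = RSSI@1m - 10*n*log10(d),
    solved for the distance d in meters."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(round(distance_from_rssi(-75.0), 1))  # ~6.3 m under the assumed calibration
```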
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 8-11 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Min (U.S. Patent Application Publication 2019/0139307 A1) in view of Lu (U.S. Patent Application Publication 2013/0176337 A1).

Regarding claim 1, Min discloses a head mounted display for virtual space (FIG. 1; paragraph [0016], virtual reality can be viewed as a computer-generated simulated environment in which a user has an apparent physical presence. Accordingly, when implemented as a virtual reality device, simulated reality device 102 provides the user with an environment that can viewed with a head-mounted display, such as glasses or other wearable display device that has near-eye display panels as lenses, to display a virtual reality environment that visually replaces a user's view of the actual environment) comprising: a display that displays image (Paragraph [0018], display device 104 represents a display device that displays images and/or information to the user), a camera that captures real space (Paragraph [0036], sensors 108 can include a dual camera system that uses image captures ...), a distance detector that detects distance to object in real space (Paragraph [0021], sensors 108 can be used to determine whether an object resides within a predetermined perimeter around, and/or distance from, simulated reality device 102), an image generator that generates image to be displayed on the display (Paragraph [0019], content generation module 106 can generate images projected onto a surface as further described herein. In one or more implementations, content generation module 106 drives display device 104 with computer-generated graphics, such as a computer-generate scene corresponding to a virtual reality experience), a controller (Paragraph [0023], FIG. 2 illustrates an expanded view of simulated reality device 102 of FIG. 1 ... Simulated reality device 102 includes processor(s) 202 ...), wherein, the controller, recognizes the type of object from the captured image of the camera (FIGS. 6 and 7a; paragraph [0044], virtual reality display 604 may engross user 602 so much that the user fails to see approaching person 606 ...; paragraph [0046], a captured image 702 of person 606 ... the simulated reality device identifies a shape of the detected object ...), extracts object (Paragraph [0046], simulated reality device 102 extracts the shape of person 606 from environment 600 to generate captured image 702) based on the detected distance (Paragraph [0021], sensors 108 can be used to determine whether an object resides within a predetermined perimeter around, and/or distance from, simulated reality device 102) and the type of object (Paragraph [0046], a captured image 702 of person 606), superimposes image of showing extracted object on the image of the virtual space and displays on the display (Paragraph [0046], simulated reality device 102 overlays a captured image of the detected object in its environment and/or with the corresponding background objects (e.g., person 606 and images of the corresponding background). Sometimes the positioning of captured image 702 overlaid on virtual reality display 604 can reflect a real world position of the detected object).

However, Min does not specifically disclose a memory that stores the type condition and distance condition of the object to be displayed, and the detected distance and the type of object that matches the type condition and distance condition.

In addition, Lu discloses (Abstract, a device and method for information processing are described. The device includes a display unit having a preset transmittance; an object determination unit configured to determine at least one object at the information processing device side ...) a memory that stores the type condition and distance condition of the object (FIG. 2; paragraph [0038], the object determining module 223 can implement the determination process by searching objects satisfying predetermined condition (e.g., distance, the object type, etc.) in the map data stored in the storage device (not shown) of the information processing device 2) to be displayed (FIG. 4; paragraphs [0060]-[0061], similar to the description for FIG. 1, the camera module 121 of the object determining unit 12 captures the object on one side of the information processing device 1 (i.e., the side towards the object) ... after the position and orientation of the information processing device 2 is determined, the visual range (i.e., viewing angle) of the scene (such as buildings, landscapes, etc.) seen by the user through the display screen 21 of the information processing device 2 can be determined by using trigonometric functions based on the distance from the user's head to the display screen 21 and the size of the display screen 21. Then the object determining module 223 can determine at least one object within the visual range based on a predetermined condition. Here, for example, the predetermined condition can be an object within one kilometer to the information processing device 2, or an object of a certain type in the visual range (e.g., a building) etc.); the detected distance and the type of object that matches the type condition and distance condition (Paragraph [0038], the predetermined condition can be an object within one kilometer to the information processing device 2, or an object of a certain type in the visual range (e.g., a building) etc. Here, the object determining module 223 can implement the determination process by searching objects satisfying predetermined condition (e.g., distance, the object type, etc.) in the map data stored in the storage device).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the simulated reality system taught by Min to incorporate the teachings of Lu, applying the method for information processing taught by Lu to provide the predetermined type and distance conditions of an object stored in the memory and to allow the system to detect the object in the captured image by searching for objects satisfying the predetermined condition. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Min according to the relied-upon teachings of Lu to obtain the invention as specified in the claim.

Regarding claim 2, the combination of Min in view of Lu discloses everything claimed as applied above (see claim 1), and Min further discloses wherein, the image showing extracted object, is an image in which the object portion is cut out from the image captured by the camera (FIG. 7a; paragraph [0046], simulated reality device 102 extracts the shape of person 606 from environment 600 to generate captured image 702), or an image in which the outline of the object is extracted from the image captured by the camera.

Regarding claim 3, the combination of Min in view of Lu discloses everything claimed as applied above (see claim 1), and Min further discloses wherein, the image showing extracted object is a virtual object showing the type of the object (FIG. 7a; paragraph [0046], simulated reality device 102 extracts the shape of person 606 from environment 600 to generate captured image 702).
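
To keep the mapped citations straight, here is a hedged pseudocode sketch of the claim 1 pipeline as the rejection reads it onto Min and Lu; all names are hypothetical stand-ins, not code from either reference:

```python
from dataclasses import dataclass

@dataclass
class DisplayCondition:
    object_type: str        # type condition stored in memory (per Lu, para. [0038])
    max_distance_m: float   # distance condition, e.g. Lu's one-kilometer example

def render_frame(camera_image, virtual_frame, conditions,
                 recognize, measure_distance, cut_out, superimpose):
    """Claim 1 sketch: recognize object types in the captured real-space image
    (Min [0044], [0046]), keep only objects matching a stored type/distance
    condition (Lu [0038]), and superimpose them on the virtual-space image
    (Min [0046]). The four callables are placeholders for the recognizer,
    distance detector, cut-out step, and compositor."""
    for obj in recognize(camera_image):
        d = measure_distance(obj)   # distance detector (Min [0021])
        if any(c.object_type == obj.type and d <= c.max_distance_m
               for c in conditions):
            virtual_frame = superimpose(virtual_frame, cut_out(camera_image, obj))
    return virtual_frame            # image generator output shown on the display
```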
Regarding claim 4, the combination of Min in view of Lu discloses everything claimed as applied above (see claim 1), and Min further discloses wherein, the controller (Paragraph [0023], FIG. 2 illustrates an expanded view of simulated reality device 102 of FIG. 1 ... Simulated reality device 102 includes processor(s) 202 ...), as object matches the type condition and distance condition (see claim 1), extracts object (Paragraph [0046], simulated reality device 102 extracts the shape of person 606 from environment 600 to generate captured image 702) based on the distance (Paragraph [0021], sensors 108 can be used to determine whether an object resides within a predetermined perimeter around, and/or distance from, simulated reality device 102).

However, Min does not specifically disclose the memory, stores as conditions, a first distance at which an object is displayed regardless of the type condition, and a second distance at which an object that matches the type condition is displayed, the controller, as object matches the type condition and distance condition, object that matches condition of the second distance, and regardless of the type condition, object that matches condition of the first distance.

In addition, Lu discloses the memory, stores as conditions (FIG. 2; paragraph [0038], predetermined condition (e.g., distance, the object type, etc.) in the map data stored in the storage device), a first distance (Paragraph [0038], the object determining module 223 can implement the determination process by searching objects satisfying predetermined condition (e.g., distance)) at which an object is displayed regardless of the type condition (Paragraph [0038], determining the visual range (i.e., viewing angle) of the scene (e.g., buildings, ... object type is a building)), and a second distance (Paragraph [0038], the object determining module 223 can implement the determination process by searching objects satisfying predetermined condition (e.g., distance)) at which an object that matches the type condition is displayed (Paragraph [0038], determining the visual range (i.e., viewing angle) of the scene (e.g., landscapes, ... object type is a landscape)), the controller, as object matches the type condition and distance condition, object that matches condition of the second distance (Paragraph [0038], the predetermined condition can be an object within one kilometer to the information processing device 2), and regardless of the type condition, object that matches condition of the first distance (Paragraph [0038], an object of a certain type in the visual range (e.g., a building)).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the simulated reality system taught by Min to incorporate the teachings of Lu, applying the method for information processing taught by Lu to provide the predetermined type and distance conditions of an object stored in the memory and to allow the system to detect the object in the captured image by searching for objects satisfying the predetermined condition. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Min according to the relied-upon teachings of Lu to obtain the invention as specified in the claim.

Regarding claim 8, Min discloses a head mounted display system including a camera that captures real space (FIG. 1; paragraph [0036], sensors 108 can include a dual camera system that uses image captures ...), and a head mounted display for virtual space (Paragraph [0016], virtual reality can be viewed as a computer-generated simulated environment in which a user has an apparent physical presence. Accordingly, when implemented as a virtual reality device, simulated reality device 102 provides the user with an environment that can viewed with a head-mounted display, such as glasses or other wearable display device that has near-eye display panels as lenses, to display a virtual reality environment that visually replaces a user's view of the actual environment), wherein, the head mounted display, includes, a display that displays image (Paragraph [0018], display device 104 represents a display device that displays images and/or information to the user), a distance detector that detects distance to object in real space (Paragraph [0021], sensors 108 can be used to determine whether an object resides within a predetermined perimeter around, and/or distance from, simulated reality device 102), an image generator that generates image to be displayed on the display (Paragraph [0019], content generation module 106 can generate images projected onto a surface as further described herein. In one or more implementations, content generation module 106 drives display device 104 with computer-generated graphics, such as a computer-generate scene corresponding to a virtual reality experience), a controller (Paragraph [0023], FIG. 2 illustrates an expanded view of simulated reality device 102 of FIG. 1 ... Simulated reality device 102 includes processor(s) 202 ...), wherein, the controller, recognizes the type of object from the captured image of the camera (FIGS. 6 and 7a; paragraph [0044], virtual reality display 604 may engross user 602 so much that the user fails to see approaching person 606 ...; paragraph [0046], a captured image 702 of person 606 ... the simulated reality device identifies a shape of the detected object ...), extracts object (Paragraph [0046], simulated reality device 102 extracts the shape of person 606 from environment 600 to generate captured image 702) based on the detected distance (Paragraph [0021], sensors 108 can be used to determine whether an object resides within a predetermined perimeter around, and/or distance from, simulated reality device 102) and the type of object (Paragraph [0046], a captured image 702 of person 606), superimposes image of showing extracted object on the image of the virtual space and displays on the display (Paragraph [0046], simulated reality device 102 overlays a captured image of the detected object in its environment and/or with the corresponding background objects (e.g., person 606 and images of the corresponding background). Sometimes the positioning of captured image 702 overlaid on virtual reality display 604 can reflect a real world position of the detected object).

However, Min does not specifically disclose a memory that stores the type condition and distance condition of the object to be displayed, and the detected distance and the type of object that matches the type condition and distance condition.

In addition, Lu discloses (Abstract, a device and method for information processing are described. The device includes a display unit having a preset transmittance; an object determination unit configured to determine at least one object at the information processing device side ...) a memory that stores the type condition and distance condition of the object (FIG. 2; paragraph [0038], the object determining module 223 can implement the determination process by searching objects satisfying predetermined condition (e.g., distance, the object type, etc.) in the map data stored in the storage device (not shown) of the information processing device 2) to be displayed (FIG. 4; paragraphs [0060]-[0061], similar to the description for FIG. 1, the camera module 121 of the object determining unit 12 captures the object on one side of the information processing device 1 (i.e., the side towards the object) ... after the position and orientation of the information processing device 2 is determined, the visual range (i.e., viewing angle) of the scene (such as buildings, landscapes, etc.) seen by the user through the display screen 21 of the information processing device 2 can be determined by using trigonometric functions based on the distance from the user's head to the display screen 21 and the size of the display screen 21. Then the object determining module 223 can determine at least one object within the visual range based on a predetermined condition. Here, for example, the predetermined condition can be an object within one kilometer to the information processing device 2, or an object of a certain type in the visual range (e.g., a building) etc.), and the detected distance and the type of object that matches the type condition and distance condition (Paragraph [0038], the predetermined condition can be an object within one kilometer to the information processing device 2, or an object of a certain type in the visual range (e.g., a building) etc. Here, the object determining module 223 can implement the determination process by searching objects satisfying predetermined condition (e.g., distance, the object type, etc.) in the map data stored in the storage device).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the simulated reality system taught by Min to incorporate the teachings of Lu, applying the method for information processing taught by Lu to provide the predetermined type and distance conditions of an object stored in the memory and to allow the system to detect the object in the captured image by searching for objects satisfying the predetermined condition. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Min according to the relied-upon teachings of Lu to obtain the invention as specified in the claim.

Regarding claim 9, the combination of Min in view of Lu discloses everything claimed as applied above (see claim 8), and Min further discloses wherein, the image showing extracted object, is an image in which the object portion is cut out from the image captured by the camera (FIG. 7a; paragraph [0046], simulated reality device 102 extracts the shape of person 606 from environment 600 to generate captured image 702), or an image in which the outline of the object is extracted from the image captured by the camera.

Regarding claim 10, the combination of Min in view of Lu discloses everything claimed as applied above (see claim 8), and Min further discloses wherein, the image showing extracted object is a virtual object showing the type of the object (FIG. 7a; paragraph [0046], simulated reality device 102 extracts the shape of person 606 from environment 600 to generate captured image 702).
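
Claims 4 and 11 (claim 11 follows below) layer two distance thresholds onto this filter: inside a first distance every object is shown regardless of type, and out to a second distance only type-matching objects are shown. Here is a minimal predicate capturing that reading, assuming the first distance is the closer of the two (an assumption; the claim language does not fix the ordering):

```python
def should_display(distance_m: float, obj_type: str, allowed_types: set,
                   first_distance_m: float, second_distance_m: float) -> bool:
    """Claims 4/11 sketch: type is ignored inside the first distance; between
    the first and second distances, only the stored types are displayed."""
    if distance_m <= first_distance_m:
        return True                                   # shown regardless of type
    return distance_m <= second_distance_m and obj_type in allowed_types

print(should_display(0.8, "cat", {"person"}, 1.0, 5.0))     # True: inside first distance
print(should_display(3.0, "cat", {"person"}, 1.0, 5.0))     # False: type not stored
print(should_display(3.0, "person", {"person"}, 1.0, 5.0))  # True: type matches
```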
Regarding claim 11, the combination of Min in view of Lu discloses everything claimed as applied above (see claim 8), and Min further discloses wherein, the controller (Paragraph [0023], FIG. 2 illustrates an expanded view of simulated reality device 102 of FIG. 1 ... Simulated reality device 102 includes processor(s) 202 ...), as object that matches the type condition and distance condition (see claim 8), extracts object (Paragraph [0046], simulated reality device 102 extracts the shape of person 606 from environment 600 to generate captured image 702) based on the distance (Paragraph [0021], sensors 108 can be used to determine whether an object resides within a predetermined perimeter around, and/or distance from, simulated reality device 102).

However, Min does not specifically disclose the memory, stores as conditions, a first distance at which an object is displayed regardless of the type condition, and a second distance at which an object that matches the type condition is displayed, the controller, as object that matches the type condition and distance condition, object that matches condition of the second distance, and regardless of the type condition, object that matches condition of the first distance.

In addition, Lu discloses the memory, stores as conditions (FIG. 2; paragraph [0038], predetermined condition (e.g., distance, the object type, etc.) in the map data stored in the storage device), a first distance (Paragraph [0038], the object determining module 223 can implement the determination process by searching objects satisfying predetermined condition (e.g., distance)) at which an object is displayed regardless of the type condition (Paragraph [0038], determining the visual range (i.e., viewing angle) of the scene (e.g., buildings, ... object type is a building)), and a second distance (Paragraph [0038], the object determining module 223 can implement the determination process by searching objects satisfying predetermined condition (e.g., distance)) at which an object that matches the type condition is displayed (Paragraph [0038], determining the visual range (i.e., viewing angle) of the scene (e.g., landscapes, ... object type is a landscape)), the controller, as object that matches the type condition and distance condition, object that matches condition of the second distance (Paragraph [0038], the predetermined condition can be an object within one kilometer to the information processing device 2), and regardless of the type condition, object that matches condition of the first distance (Paragraph [0038], an object of a certain type in the visual range (e.g., a building)).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the simulated reality system taught by Min to incorporate the teachings of Lu, applying the method for information processing taught by Lu to provide the predetermined type and distance conditions of an object stored in the memory and to allow the system to detect the object in the captured image by searching for objects satisfying the predetermined condition. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Min according to the relied-upon teachings of Lu to obtain the invention as specified in the claim.

Regarding claim 15, Min discloses a method of displaying head mounted display performed using a head mounted display for virtual space (FIG. 1; paragraph [0016], virtual reality can be viewed as a computer-generated simulated environment in which a user has an apparent physical presence. Accordingly, when implemented as a virtual reality device, simulated reality device 102 provides the user with an environment that can viewed with a head-mounted display, such as glasses or other wearable display device that has near-eye display panels as lenses, to display a virtual reality environment that visually replaces a user's view of the actual environment) comprising: an image generation step generates an image drawing the virtual space (Paragraph [0019], content generation module 106 can generate images projected onto a surface as further described herein. In one or more implementations, content generation module 106 drives display device 104 with computer-generated graphics, such as a computer-generate scene corresponding to a virtual reality experience), a shooting step captures the real space around the head mounted display (Paragraph [0036], sensors 108 can include a dual camera system that uses image captures ...), a distance detection step detects the distance to object in the real space (Paragraph [0021], sensors 108 can be used to determine whether an object resides within a predetermined perimeter around, and/or distance from, simulated reality device 102), a recognition step recognizes the type of object from the captured image (FIGS. 6 and 7a; paragraph [0044], virtual reality display 604 may engross user 602 so much that the user fails to see approaching person 606 ...; paragraph [0046], a captured image 702 of person 606 ... the simulated reality device identifies a shape of the detected object ...), an extraction step extracts object (Paragraph [0046], simulated reality device 102 extracts the shape of person 606 from environment 600 to generate captured image 702) based on the detected distance (Paragraph [0021], sensors 108 can be used to determine whether an object resides within a predetermined perimeter around, and/or distance from, simulated reality device 102) and the type of object (Paragraph [0046], a captured image 702 of person 606), a superimposed display step superimposes image of showing extracted object on the image of the virtual space and displays (Paragraph [0046], simulated reality device 102 overlays a captured image of the detected object in its environment and/or with the corresponding background objects (e.g., person 606 and images of the corresponding background). Sometimes the positioning of captured image 702 overlaid on virtual reality display 604 can reflect a real world position of the detected object).

However, Min does not specifically disclose a memory step stores the type condition and distance condition of the object to be displayed, the detected distance and the type of object that matches the type condition and distance condition, from the recognized object.

In addition, Lu discloses (Abstract, a device and method for information processing are described. The device includes a display unit having a preset transmittance; an object determination unit configured to determine at least one object at the information processing device side ...) a memory step stores the type condition and distance condition of the object (FIG. 2; paragraph [0038], the object determining module 223 can implement the determination process by searching objects satisfying predetermined condition (e.g., distance, the object type, etc.) in the map data stored in the storage device (not shown) of the information processing device 2) to be displayed (FIG. 4; paragraphs [0060]-[0061], similar to the description for FIG. 1, the camera module 121 of the object determining unit 12 captures the object on one side of the information processing device 1 (i.e., the side towards the object) ... after the position and orientation of the information processing device 2 is determined, the visual range (i.e., viewing angle) of the scene (such as buildings, landscapes, etc.) seen by the user through the display screen 21 of the information processing device 2 can be determined by using trigonometric functions based on the distance from the user's head to the display screen 21 and the size of the display screen 21. Then the object determining module 223 can determine at least one object within the visual range based on a predetermined condition. Here, for example, the predetermined condition can be an object within one kilometer to the information processing device 2, or an object of a certain type in the visual range (e.g., a building) etc.), the detected distance and the type of object that matches the type condition and distance condition, from the recognized object (Paragraph [0038], the predetermined condition can be an object within one kilometer to the information processing device 2, or an object of a certain type in the visual range (e.g., a building) etc. Here, the object determining module 223 can implement the determination process by searching objects satisfying predetermined condition (e.g., distance, the object type, etc.) in the map data stored in the storage device).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the simulated reality system taught by Min to incorporate the teachings of Lu, applying the method for information processing taught by Lu to provide the predetermined type and distance conditions of an object stored in the memory and to allow the system to detect the object in the captured image by searching for objects satisfying the predetermined condition. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Min according to the relied-upon teachings of Lu to obtain the invention as specified in the claim.

Claims 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Min (U.S. Patent Application Publication 2019/0139307 A1) in view of Lu (U.S. Patent Application Publication 2013/0176337 A1), further in view of Lovitt (U.S. Patent Application Publication 2021/0405959 A1).
Regarding claim 5, the combination of Min in view of Lu discloses everything claimed as applied above (see claim 1), and Min further discloses wherein, the distance detector, includes a microphone to detect real object (FIG. 1; paragraph [0021], Sensors 108 represent sensors used by simulated reality device 102 to detect the presence of a real world object. For example, sensors 108 can include microphone(s)), extracts object that matches the type condition and distance condition (see claim 1), superimposes image of showing extracted object on the image of the virtual space and displays on the display (see claim 1).

However, Min does not specifically disclose a microphone that collects ambient sound, a sound processing apparatus that creates data of the ambient sound image for use in determining the type and identifying the position of the sound source, the controller, recognizes the type of sound source from the data.

In addition, Lovitt discloses a microphone (Paragraph [0042], a physical environment includes physical objects located in a physical space detected by a camera, microphone ...; paragraph [0083], As shown in FIG. 4B, for instance, the user 112a can wear the augmented-reality-computing device 106a. In one or more embodiments, as discussed above, the augmented-reality-computing device 106 may include microphones) that collects ambient sound (Paragraph [0046], an audio stream captured by a microphone), a sound processing apparatus that creates data of the ambient sound image for use in determining the type and identifying the position of the sound source (Paragraph [0084], the augmented reality system 102 can further identify and classify the physical object 404. For example, the augmented reality system 102 can analyze an image frame from the image stream captured by the augmented-reality-computing device 106a to determine that the physical object 404 is a smart speaker utilizing a wireless protocol. Based on identifying the physical object 404 as a smart speaker, the augmented reality system 102 can further utilize web lookups, database lookups, and other info to determine features and characteristics associated with the physical object 404), the controller, recognizes the type of sound source from the data (Paragraph [0084], the augmented reality system 102 can determine that the physical object 404 can play audio based on data transmitted via a wireless protocol and the physical object 404 has a particular size).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the simulated reality system taught by Min in view of Lu to incorporate the teachings of Lovitt, applying the systems for detecting that a physical space includes a physical object taught by Lovitt to use a microphone collecting ambient sound and to determine the type of sound source from the collected data. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Min in view of Lu according to the relied-upon teachings of Lovitt to obtain the invention as specified in the claim.

Regarding claim 12, the combination of Min in view of Lu discloses everything claimed as applied above (see claim 8), and Min further discloses wherein, the distance detector, includes a microphone to detect real object (FIG. 1; paragraph [0021], Sensors 108 represent sensors used by simulated reality device 102 to detect the presence of a real world object. For example, sensors 108 can include microphone(s)), extracts object that matches the type condition and distance condition (see claim 8), superimposes image of showing extracted object on the image of the virtual space and displays on the display (see claim 8).

However, Min does not specifically disclose a microphone that collects ambient sound, a sound processing apparatus that creates data of the ambient sound image for use in determining the type and identifying the position of the sound source, the controller, recognizes the type of sound source from the data.

In addition, Lovitt discloses a microphone (Paragraph [0042], a physical environment includes physical objects located in a physical space detected by a camera, microphone ...; paragraph [0083], As shown in FIG. 4B, for instance, the user 112a can wear the augmented-reality-computing device 106a. In one or more embodiments, as discussed above, the augmented-reality-computing device 106 may include microphones) that collects ambient sound (Paragraph [0046], an audio stream captured by a microphone), a sound processing apparatus that creates data of the ambient sound image for use in determining the type and identifying the position of the sound source (Paragraph [0084], the augmented reality system 102 can further identify and classify the physical object 404. For example, the augmented reality system 102 can analyze an image frame from the image stream captured by the augmented-reality-computing device 106a to determine that the physical object 404 is a smart speaker utilizing a wireless protocol. Based on identifying the physical object 404 as a smart speaker, the augmented reality system 102 can further utilize web lookups, database lookups, and other info to determine features and characteristics associated with the physical object 404), the controller, recognizes the type of sound source from the data (Paragraph [0084], the augmented reality system 102 can determine that the physical object 404 can play audio based on data transmitted via a wireless protocol and the physical object 404 has a particular size).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the simulated reality system taught by Min in view of Lu to incorporate the teachings of Lovitt, applying the systems for detecting that a physical space includes a physical object taught by Lovitt to use a microphone collecting ambient sound and to determine the type of sound source from the collected data. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Min in view of Lu according to the relied-upon teachings of Lovitt to obtain the invention as specified in the claim.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Min (U.S. Patent Application Publication 2019/0139307 A1) in view of Lu (U.S. Patent Application Publication 2013/0176337 A1), further in view of O'Malley (U.S. Patent No. 11,526,721 B1).

Regarding claim 7, the combination of Min in view of Lu discloses everything claimed as applied above (see claim 1). However, Min does not specifically disclose wherein, the memory, stores information indicating the type of object not to be displayed, the controller, does not perform to display the object identified from the information.

In addition, O'Malley discloses (Abstract, a vehicle can capture data that can be converted into a synthetic scenario for use in a simulator. Objects can be identified in the data and attribute data associated with the objects can be determined. Updated attribute data may be determined based on confidence values and/or distance measurements associated with the attribute data. The object and attribute data may be used to generate synthetic scenarios of a simulated environment, including simulated objects that traverse the environment and perform actions based on the attribute data associated with the simulated objects, the captured data, and/or interactions within the simulated environment. The scenarios can be used for testing and validating interactions and responses of a vehicle controller within the simulated environment) wherein, the memory (Col 33, lines 25-31, FIG. 7 depicts a block diagram of an example system 700 for implementing the techniques discussed herein. In at least one example, the example system 700 can include a vehicle 702, which can be similar to the vehicle(s) 104 described above with reference to FIG. 1. In the illustrated example system 700, the vehicle 702 is an autonomous vehicle; however, the vehicle 702 can be any other type of vehicle; Col 38, lines 13-31, the vehicle 702 can connect to computing device(s) 732 via network(s) 716 and can include one or more processor(s) 734 and memory 736 ...), stores information (Col 33, lines 25-31, the memory 736 of the computing device(s) 732 stores a log data component 738, an objects component 740, an attributes component 742, a triggering component 744, a scenario component 746, and a simulation component 748) indicating the type of object not to be displayed (Col 41, lines 31-60, the scenario component 746 can use filters to remove objects represented in the log data from a simulated scenario based on attributes associated with the objects. In some instances, the scenario component 746 can filter objects based on an object/classification type (car, pedestrian, motorcycle, bicyclist, etc.), an object size (e.g., length, width, height, and/or volume), a confidence level, track length, an amount of interaction between the object and a vehicle generating the log data, and/or a time period ... The scenario component 746 can use a volume-based filter such that objects that are associated with a volume equal to or greater than a threshold volume of three cubic meters, such as buildings, are represented in the simulated scenario and objects that are associated with a volume less than three cubic meters are not represented in the simulated scenario, such as the mailboxes ...), the controller (Col 38, lines 13-31, the vehicle 702 can connect to computing device(s) 732 via network(s) 716 and can include one or more processor(s) 734), does not perform to display the object identified from the information (Col 42, lines 7-37, the simulation component 748 can execute the simulated scenario as a set of simulation instructions and generate simulation data. In some instances, the simulation component 748 can execute multiple simulated scenarios simultaneously and/or in parallel. This can allow a user to edit a simulated scenario and execute permutations of the simulated scenario with variations between each simulated scenario ... The simulation component 748 generates the simulation data indicating how the autonomous controller performed (e.g., responded) and can compare the simulation data to a predetermined outcome and/or determine if any predetermined rules/assertions were broken/triggered ... the predetermined rules/assertions can be based on the simulated scenario (e.g., traffic rules regarding crosswalks can be enabled based on a crosswalk scenario or traffic rules regarding crossing a lane marker can be disabled for a stalled vehicle scenario). In some instances, the simulation component 748 can enable and disable rules/assertions dynamically as the simulation progresses. For example, as a simulated object approaches a school zone, rules/assertions related to school zones can be enabled and disabled as the simulated object departs from the school zone. In some instances, the rules/assertions can include comfort metrics that relate to, for example, how quickly an object can accelerate given the simulated scenario).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the simulated reality system taught by Min in view of Lu to incorporate the teachings of O'Malley, applying the process for determining updated attribute data values using log data taught by O'Malley to provide predetermined rules for filtering objects based on object type and to control which objects are not represented in the display based on the filter condition. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Min in view of Lu according to the relied-upon teachings of O'Malley to obtain the invention as specified in the claim.
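
Claims 7 and 14 run the display condition in reverse: the memory stores types that should not be displayed, and the controller suppresses them, much like the object/classification-type filters the rejection cites from O'Malley. A tiny sketch of that do-not-display filter (hypothetical types and data structure):

```python
def filter_displayable(objects, suppressed_types=frozenset({"mailbox"})):
    """Claims 7/14 sketch: drop any recognized object whose type is on the
    stored do-not-display list; everything else passes through to the overlay."""
    return [obj for obj in objects if obj["type"] not in suppressed_types]

detected = [{"type": "person", "dist_m": 2.0}, {"type": "mailbox", "dist_m": 1.0}]
print(filter_displayable(detected))   # only the person is superimposed
```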
Regarding claim 14, the combination of Min in view of Lu discloses everything claimed as applied above (see claim 8). However, Min does not specifically disclose wherein, the memory, stores information indicating the type of object not to be displayed, the controller, does not perform to display the object identified from the information.

In addition, O'Malley discloses (Abstract, a vehicle can capture data that can be converted into a synthetic scenario for use in a simulator. Objects can be identified in the data and attribute data associated with the objects can be determined. Updated attribute data may be determined based on confidence values and/or distance measurements associated with the attribute data. The object and attribute data may be used to generate synthetic scenarios of a simulated environment, including simulated objects that traverse the environment and perform actions based on the attribute data associated with the simulated objects, the captured data, and/or interactions within the simulated environment. The scenarios can be used for testing and validating interactions and responses of a vehicle controller within the simulated environment) wherein, the memory (Col 33, lines 25-31, FIG. 7 depicts a block diagram of an example system 700 for implementing the techniques discussed herein. In at least one example, the example system 700 can include a vehicle 702, which can be similar to the vehicle(s) 104 described above with reference to FIG. 1. In the illustrated example system 700, the vehicle 702 is an autonomous vehicle; however, the vehicle 702 can be any other type of vehicle; Col 38, lines 13-31, the vehicle 702 can connect to computing device(s) 732 via network(s) 716 and can include one or more processor(s) 734 and memory 736 ...), stores information (Col 33, lines 25-31, the memory 736 of the computing device(s) 732 stores a log data component 738, an objects component 740, an attributes component 742, a triggering component 744, a scenario component 746, and a simulation component 748) indicating the type of object not to be displayed (Col 41, lines 31-60, the scenario component 746 can use filters to remove objects represented in the log data from a simulated scenario based on attributes associated with the objects. In some instances, the scenario component 746 can filter objects based on an object/classification type (car, pedestrian, motorcycle, bicyclist, etc.), an object size (e.g., length, width, height, and/or volume), a confidence level, track length, an amount of interaction between the object and a vehicle generating the log data, and/or a time period ... The scenario component 746 can use a volume-based filter such that objects that are associated with a volume equal to or greater than a threshold volume of three cubic meters, such as buildings, are represented in the simulated scenario and objects that are associated with a volume less than three cubic meters are not represented in the simulated scenario, such as the mailboxes ...), the controller (Col 38, lines 13-31, the vehicle 702 can connect to computing device(s) 732 via network(s) 716 and can include one or more processor(s) 734), does not perform to display the object identified from the information (Col 42, lines 7-37, the simulation component 748 can execute the simulated scenario as a set of simulation instructions and generate simulation data. In some instances, the simulation component 748 can execute multiple simulated scenarios simultaneously and/or in parallel. This can allow a user to edit a simulated scenario and execute permutations of the simulated scenario with variations between each simulated scenario ... The simulation component 748 generates the simulation data indicating how the autonomous controller performed (e.g., responded) and can compare the simulation data to a predetermined outcome and/or determine if any predetermined rules/assertions were broken/triggered ... the predetermined rules/assertions can be based on the simulated scenario (e.g., traffic rules regarding crosswalks can be enabled based on a crosswalk scenario or traffic rules regarding crossing a lane marker can be disabled for a stalled vehicle scenario). In some instances, the simulation component 748 can enable and disable rules/assertions dynamically as the simulation progresses. For example, as a simulated object approaches a school zone, rules/assertions related to school zones can be enabled and disabled as the simulated object departs from the school zone. In some instances, the rules/assertions can include comfort metrics that relate to, for example, how quickly an object can accelerate given the simulated scenario).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the simulated reality system taught by Min in view of Lu to incorporate the teachings of O'Malley, applying the process for determining updated attribute data values using log data taught by O'Malley to provide predetermined rules for filtering objects based on object type and to control which objects are not represented in the display based on the filter condition. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Min in view of Lu according to the relied-upon teachings of O'Malley to obtain the invention as specified in the claim.

Examiner's Comment

Claims 6 and 13 have no art rejection but are rejected under 35 U.S.C. 112. A final determination of patentability, after further search, will be made upon resolution of the above 35 U.S.C. 112 rejections.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Xilin Guo whose telephone number is (571) 272-5786. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:30 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XILIN GUO/
Primary Examiner, Art Unit 2616

Prosecution Timeline

Jun 03, 2024
Application Filed
Jan 16, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602855: LIVE MODEL PROMPTING AND REAL-TIME OUTPUT OF PHOTOREAL SYNTHETIC CONTENT (2y 5m to grant; granted Apr 14, 2026)
Patent 12597403: DISPLAY DEVICE FOR A VEHICLE (2y 5m to grant; granted Apr 07, 2026)
Patent 12579712: ASSET CREATION USING GENERATIVE ARTIFICIAL INTELLIGENCE (2y 5m to grant; granted Mar 17, 2026)
Patent 12579766: SYSTEM AND METHOD FOR RAPID OUTFIT VISUALIZATION (2y 5m to grant; granted Mar 17, 2026)
Patent 12573121: Automated Generation and Presentation of Sign Language Avatars for Video Content (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+17.4%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 456 resolved cases by this examiner. Grant probability derived from career allow rate.
