Prosecution Insights
Last updated: April 19, 2026
Application No. 18/550,753

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Non-Final OA: §101, §103, §112
Filed
Sep 15, 2023
Examiner
DRYDEN, EMMA ELIZABETH
Art Unit
2677
Tech Center
2600 — Communications
Assignee
Sony Group Corporation
OA Round
1 (Non-Final)
Grant Probability: 58% (Moderate)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases; 7 granted / 12 resolved), -3.7% vs TC avg
Interview Lift: +25.0% (strong), comparing resolved cases with an interview versus without
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 46 across all art units, 34 currently pending

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§103: 56.4% (+16.4% vs TC avg)
§102: 16.6% (-23.4% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 12 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that the application is a National Stage application of PCT/JP2022/006671 dated 02/18/2022. Receipt is acknowledged that the application claims priority to foreign application No. JP2021-053061 dated 03/26/2021. Copies of certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 USC 119(e) and 37 CFR 1.78. Claims 1-20 have been afforded the benefit of the filing date of 03/26/2021.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: “S145” from FIG. 17. The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: G1, G2, P1, and P2. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or an amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The title of the invention is not descriptive.
A new title is required that is clearly indicative of the invention to which the claims are directed. The abstract of the disclosure is objected to because it is a copy of claim 1 and thus includes legal phraseology. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “acquiring section” in claims 1 and 17; “calibration processing section” in claims 1-2, 7-8, 11, 13-15, and 17; “determining section” in claims 3-4; and “presentation processing section” in claim 16.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof (for example, para 46). If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-6, 9, and 17-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The phrase “is assumed to be” makes it unclear whether the details following the phrase are required by the claim or not. For example, in claim 9, it is unclear whether the first parameter is required to be “a parameter for identifying a positional relation between the first camera and the display section”. For examination purposes, all claims will be interpreted to include the limitations following the phrase “is assumed to be”.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because claim 20 is directed to a signal per se.
Claim 20 explicitly recites “A storage medium being read by a computer and having stored thereon a program that causes a computation processing apparatus to execute functions of…” Thus, a computer-readable medium is actually claimed. The broadest reasonable interpretation of machine-readable media can encompass non-statutory transitory forms of signal transmission, such as a propagating electrical or electromagnetic signal per se (MPEP 2106.03). The specification as filed does not limit the definition of computer-readable storage medium to non-transitory mediums. A transitory signal, while physical and real, does not possess concrete structure that would qualify as a device or part under the definition of a machine; it is not a tangible article or commodity under the definition of a manufacture (even though it is man-made and physical in that it exists in the real world and has tangible causes and effects); and it is not composed of matter such that it would qualify as a composition of matter (MPEP 2106.03). Thus, claim 20 is non-statutory under 35 U.S.C. § 101. The examiner suggests amending claim 20 to recite: “A non-transitory storage medium storing a program, the program when read by a computer causes a computation processing apparatus to execute functions of…”

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-14, 17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Katz et al. (U.S. Patent Application Publication No. 2015/0193983 A1), hereinafter Katz, in view of Southworth et al. (U.S. Patent Application Publication No. 2020/0342832 A1), hereinafter Southworth.

Regarding claim 1, Katz teaches an information processing apparatus (Katz, para 16: “virtual reality (VR) system environment 100”) comprising: an acquiring section (Katz, estimation module, para 41: “estimation module 330 receives slow calibration data and/or fast calibration data”) that acquires first estimation information representing a position and a posture of a display section (Katz, para 23: “fast calibration data indicating an estimated position of the VR headset 105… measure rotational motion (e.g., pitch, yaw, roll)…The reference point is a point that may be used to describe the position of the VR headset 105. While the reference point may generally be defined as a point in space; however, in practice the reference point is defined as a point within the VR headset 105”; display 115 is part of the VR headset, see para 19); and second estimation information representing a position and a posture of the display section (Katz, position of the locators which are on the VR headset, see next citation; para 4: “locators included on each of the rigid bodies for tracking the user's head position and orientation”, orientation of the user’s head conveys the orientation of the head-mounted display, also known as the VR headset) that are estimated on a basis of a second captured image captured with a second camera installed in a space where the user exists (Katz, para 25: “The imaging device 135 generates slow calibration data in accordance with calibration parameters received from the VR console 110. Slow calibration data includes one or more images showing observed positions of the locators 120 that are detectable by the imaging device 135.
The imaging device 135 may include one or more cameras”; locators are on the VR headset, thus the imaging device must be in the space where the user is to track them); and a calibration processing section (Katz, para 45: “parameter adjustment module 340”) that generates correction information (Katz, para 25: “calibration parameters”) used for calibration of a parameter of either the first camera or the second camera (Katz, either the first or second camera, see VR headset and imaging device calibration parameters in para 39) on a basis of the first estimation information and the second estimation information (Katz, para 59: “VR console 110 further adjusts 480 one or more calibration parameters until intermediate estimated positions of the VR headset 105 received from the fast calibration data are within a threshold distance of predicted positions for the VR headset 105 or the reference point 215, where the predicted positions are determined from the calibrated positions of the reference point 215 associated with various images from the slow calibration data”, slow data corresponds to the second estimation information and fast data corresponds to the first estimation information, refer to citations used above). Katz fails to teach wherein the first estimation information is estimated on a basis of a first captured image captured with a first camera worn by a user along with the display section. 
However, Southworth teaches a similar system (Southworth, Tracking position of HMD and objects using HMD headset and another imaging sensor, see claims 1-2 and FIG 4), comprising first estimation information (Southworth, position and orientation of the HMD, para 18: “an HMD 110 may include one or more of: a sensor to capture information for determining position or orientation of the HMD 110 in physical space (e.g., in two or three dimensions)”) estimated on a basis of a first captured image captured with a first camera worn by a user along with the display section (Southworth, location of HMD determined based on image data captured by the HMD camera, para 36: “The sticker system 100 receives 410 an image of a sticker 160 captured by an imaging sensor (e.g., of a HMD 110 or computing device 120)”; para 37: “The sticker system 100 determines 430 location information of a HMD 110 worn by a user 105. The imaging sensor may be coupled to the HMD 110”). Katz discloses estimating the position and posture of a display section of a head mounted display using data collected by sensors of the headset worn by a user along with the display section (Katz, IMU, gyroscope, etc. of the headset, see para 23 citation above). Southworth teaches estimating the position and posture of a display section of a head mounted display using image data captured with a camera worn by a user along with the display section (Southworth, camera coupled to the headset, see citations above). Thus, both Katz and Southworth disclose a system/method for estimating the position and posture of a head mounted display using sensor components of the display headset itself. 
A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that the first estimation information of Katz could have been substituted for the first estimation information of Southworth because both serve the purpose of estimating information about the head mounted display to be compared with that of an external camera. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute the first estimation information of Katz for the first estimation information based on image data from a camera coupled to the headset, disclosed by Southworth, according to known methods to yield the predictable result of improving the accuracy of the position estimation data by using image data of objects in the environment instead of IMU data which may be vulnerable to drift errors, thus improving the accuracy of the camera calibration.

Regarding claim 2 (dependent on claim 1), Katz in view of Southworth teaches wherein the calibration processing section calibrates the parameter used for estimation of the position and the posture of the display section by using the correction information (Katz, adjustments to calibration parameters, also known as the correction information, calibrates the imaging device and headset, see para 59; parameters used to output slow calibration data, which is used in the position/posture estimation, may be adjusted, para 39: “Examples of imaging parameters include: focal length, focus, frame rate, ISO, shutter speed, aperture, camera orientation…or any other parameter used by the imaging device 135 to output slow calibration data”).
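The correction-information idea that the rejection maps onto the Katz/Southworth combination can be illustrated with a toy sketch: derive an offset from the disagreement between the headset-derived pose estimate and the external-camera estimate, then apply it to later headset estimates. The function names and pose representation are illustrative only; neither reference discloses this exact code:

```python
# Toy model of "correction information" generated from two pose estimates.
# Poses are simplified to (x, y, z) positions; real systems would also
# carry orientation. Names here are hypothetical, not from the references.

def correction_offset(first_est, second_est):
    """Per-axis offset taking the first (headset) estimate to the second
    (external-camera) estimate."""
    return tuple(s - f for f, s in zip(first_est, second_est))

def apply_correction(first_est, offset):
    """Correct a headset pose estimate with a previously computed offset."""
    return tuple(f + o for f, o in zip(first_est, offset))

headset_pose = (1.0, 2.0, 0.5)    # first estimation information
external_pose = (1.5, 2.0, 0.25)  # second estimation information
offset = correction_offset(headset_pose, external_pose)
assert apply_correction(headset_pose, offset) == external_pose
```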
Regarding claim 4 (dependent on claim 1), Katz in view of Southworth teaches comprising: a determining section (Katz, para 50: “monitoring module 350”) that determines whether or not to execute the calibration, wherein the determining section determines that calibration is unnecessary in a case where, in the determination, a difference between the first estimation information and the second estimation information is smaller than a threshold (Katz, adjustments are made until distance is within a threshold, thus calibration will not be executed if the distance is smaller than the threshold, para 59: “VR console 110 further adjusts 480 one or more calibration parameters until intermediate estimated positions of the VR headset 105 received from the fast calibration data are within a threshold distance”; para 62: “if a distance between a curve of predicted positions of the reference point 215 and an intermediate estimated position of the reference point 215 is less than a threshold distance (e.g., 1 mm), the VR console 110 provides the intermediate estimated position to the VR engine 155”).

Regarding claim 5 (dependent on claim 1), Katz in view of Southworth teaches wherein the second estimation information is assumed to be information regarding the position and the posture of the display section that are estimated on a basis of the second captured image captured such that a predetermined location on a housing having the display section is identifiable (Katz, the display is part of the front rigid body, para 31: “The tracking module 150 re-calibrates using slow calibration data including one or more images that include locators 120 on the front rigid body and locators on the rear rigid body”; para 34: “The front rigid body 205 includes the electronic display 115 (not shown), the IMU 130, the one or more position sensors 125, and the locators 120”).
Regarding claim 6 (dependent on claim 5), Katz in view of Southworth teaches wherein the second estimation information is assumed to be information regarding the position and the posture of the display section that are estimated on a basis of the second captured image capturing an image of a marker provided at the predetermined location (Katz, para 20: “The locators 120 are objects located in specific positions on the VR headset 105 relative to one another and relative to a specific reference point on the VR headset 105. A locator 120 may be…a reflective marker”; see also para 25 citation in claim 1).

Regarding claim 7 (dependent on claim 1), Katz in view of Southworth teaches wherein the calibration processing section performs the calibration in a case where the difference between the first estimation information and the second estimation information is equal to or greater than a threshold (Katz, adjustments are made until distance is within a threshold, thus calibration will be executed if the distance is larger than the threshold, para 59: “VR console 110 further adjusts 480 one or more calibration parameters until intermediate estimated positions of the VR headset 105 received from the fast calibration data are within a threshold distance”; para 62: “if a distance between a curve of predicted positions of the reference point 215 and an intermediate estimated position of the reference point 215 is less than a threshold distance (e.g., 1 mm), the VR console 110 provides the intermediate estimated position to the VR engine 155”).
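The threshold test that the claim 4 and claim 7 rejections read onto Katz (paras 59, 62) can be sketched as follows. The pose representation, function names, and use of the 1 mm figure as a default are illustrative assumptions, not code from either reference:

```python
# Minimal sketch of the determining-section logic mapped onto Katz:
# run calibration only when the two pose estimates disagree by at least
# a threshold distance (claim 7); otherwise calibration is unnecessary
# (claim 4). Positions are (x, y, z) in millimetres.
import math

THRESHOLD_MM = 1.0  # example threshold distance from Katz para 62

def position_difference(first_est, second_est):
    """Euclidean distance between two position estimates, in mm."""
    return math.dist(first_est, second_est)

def calibration_needed(first_est, second_est, threshold=THRESHOLD_MM):
    """True iff the estimates differ by at least the threshold."""
    return position_difference(first_est, second_est) >= threshold

# Headset-derived (fast) vs. external-camera (slow) estimates:
assert not calibration_needed((0.0, 0.0, 0.0), (0.3, 0.4, 0.0))  # 0.5 mm apart
assert calibration_needed((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))      # 5.0 mm apart
```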
Regarding claim 8 (dependent on claim 1), Katz in view of Southworth teaches wherein the calibration processing section (see note below regarding the calibration processing section) performs the calibration about a first parameter used for estimation of the first estimation information (Southworth, calibrate position of HMD, para 35: “registration can be used to determine (or recalibrate) position of a registered device relative to boundaries of a room or another reference”; claim 2: “applying an offset to calibrate the location information of the HMD”). In the combination of Katz in view of Southworth in claim 1, the first estimation information is determined using a camera of the headset, taught by Southworth. Katz teaches calibration using the calibration parameters on the imaging device, a camera, as well as the IMU. Thus, the calibration processing section of Katz could perform calibration for the first and second cameras in the apparatus of Katz in view of Southworth in order to improve the performance of the first camera, thus improving the position/posture tracking (Katz, para 25: “The imaging device 135 receives one or more calibration parameters from the VR console 110, and may adjust one or more imaging parameters (e.g., focal length, focus, frame rate, ISO, sensor temperature, shutter speed, aperture, etc.) based on the calibration parameters”).

Regarding claim 9 (dependent on claim 8), Katz in view of Southworth teaches wherein the first parameter is assumed to be a parameter for identifying a positional relation between the first camera and the display section (Southworth, camera offset to modify the display, para 22: “the sticker system 100 can use the received information to calculate a positional offset 135 from the position of the HMD 110 to position of the sticker 160. The sticker system 100 may use the offset 135 to generate, modify, or display digital content 150 relative to position of the HMD 110 and/or sticker 160”).
Regarding claim 10 (dependent on claim 8), Katz in view of Southworth teaches wherein the first parameter includes at least any one of an optical-axis direction, a focal length, and a distortion of the first camera (Katz, focal length, para 25: “The imaging device 135 receives one or more calibration parameters from the VR console 110, and may adjust one or more imaging parameters (e.g., focal length, focus, frame rate, ISO, sensor temperature, shutter speed, aperture, etc.) based on the calibration parameters”; see explanation in claim 8 wherein the calibration processing section could apply the imaging device calibration to the first camera in Katz in view of Southworth).

Regarding claim 11 (dependent on claim 1), Katz in view of Southworth teaches wherein the calibration processing section performs the calibration for a second parameter used for estimation of the second estimation information (Katz, adjustment of one or more imaging parameters, see para 25 citation in claim 1).

Regarding claim 12 (dependent on claim 11), Katz in view of Southworth teaches wherein the second parameter includes at least any one of an optical-axis direction, a focal length, and a distortion of the second camera (Katz, focal length, para 25: “The imaging device 135 receives one or more calibration parameters from the VR console 110, and may adjust one or more imaging parameters (e.g., focal length, focus, frame rate, ISO, sensor temperature, shutter speed, aperture, etc.) based on the calibration parameters”).
Regarding claim 13 (dependent on claim 11), Katz in view of Southworth teaches wherein the calibration processing section performs calibration of the second parameter on a basis of an image capturing area of a particular target object in the second captured image (Katz, calibration is performed based on the markers on the headset, para 20: “The locators 120 are objects located in specific positions on the VR headset 105 relative to one another and relative to a specific reference point on the VR headset 105. A locator 120 may be…a reflective marker”; para 25: “Slow calibration data includes one or more images showing observed positions of the locators 120 that are detectable by the imaging device 135.”).

Regarding claim 14 (dependent on claim 1), Katz in view of Southworth teaches wherein the calibration processing section performs the calibration when a first mode in which a predetermined process is performed by using the first estimation information is switched to a second mode in which the predetermined process is performed by using the second estimation information (Katz in view of Southworth teaches predetermined processes using both the first estimation information and second estimation information, where the position and posture of the display section is acquired; the calibration method taught by Katz requires the second estimation information, thus the system must switch to a mode where the second estimation information is collected and processed to perform the calibration – see para 59 wherein the parameter adjustment module performs the calibration based on the second estimation information, see also claim 1 rejection).
Regarding claim 17 (dependent on claim 1), Katz in view of Southworth teaches wherein the information processing apparatus is assumed to be a head mounted display apparatus including the acquiring section and the calibration processing section (Katz, Virtual reality system environment 100 in para 16 and FIG 1 – the system includes a VR headset, and is thus a head mounted display apparatus; see FIG 1 wherein the system includes the acquiring and calibration sections introduced in the claim 1 rejection).

Regarding claim 19, Katz teaches an information processing method executed by a computer apparatus (Katz, para 88: “These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like”; see also para 89-91). All further claim limitations are met and rendered obvious by Katz in view of Southworth because the method steps of claim 19 are the same as those performed by the information processing apparatus in claim 1.

Regarding claim 20, Katz teaches a storage medium being read by a computer and having stored thereon a program that causes a computation processing apparatus to execute functions (Katz, para 89: “a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described”). All further claim limitations are met and rendered obvious by Katz in view of Southworth because the executed steps of claim 20 are the same as those performed by the information processing apparatus in claim 1.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Katz in view of Southworth, in further view of Wakai et al. (U.S. Patent Application Publication No. 2018/0300901 A1), hereinafter Wakai.
Regarding claim 3 (dependent on claim 1), Katz in view of Southworth fails to teach comprising: a determining section that determines whether or not to execute the calibration, wherein the determining section performs the determination on a basis of whether or not the parameter is outside an expressible range. However, Wakai teaches a camera calibration method (Wakai, abstract) that determines whether or not to execute the calibration, wherein the determining section performs the determination on a basis of whether or not the parameter is outside an expressible range (Wakai, minimizing the distance measurement error is outside an expressible range; since it is impossible, minimizing that value is not executed, para 25: “the calibration error greatly affects the accuracy of the stereo distance measurement and it is impossible to minimize the distance measurement error in a wide range of field of view including the image outer peripheral portion”). Katz in view of Southworth discloses a base method for calibrating a camera, but does not specify what occurs when parameters are calculated outside an expressible range for the system. Wakai teaches a method for calibrating a camera wherein parameters outside an expressible range may be determined in certain situations. Wakai teaches a known technique wherein calibration cannot be performed if the determined parameter value is inaccurate, or impossible, given the sensor or device. If a calculated parameter value of the calibration is impossible, the calibration cannot be executed. A person having ordinary skill in the art, before the effective filing date of the claimed invention, could have applied the known technique, as taught by Wakai, in the same way to the apparatus and determining section of Katz in view of Southworth and achieved predictable results of increasing the accuracy of the system by not performing the calibration process using erroneous values.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Katz in view of Southworth, in further view of ROS Wiki. How to Calibrate a Stereo Camera. Internet Archive, 5 March 2021 [online], [retrieved on 2025-12-23]. Retrieved from the Internet <URL: https://web.archive.org/web/20210305073450/https:/wiki.ros.org/camera_calibration/Tutorials/StereoCalibration>, hereinafter ROS.

Regarding claim 15 (dependent on claim 1), Katz in view of Southworth fails to teach wherein the calibration processing section performs a process of comparing image-capturing times of the first captured image used for estimation of the first estimation information and the second captured image used for estimation of the second estimation information that are used for the calibration, and performs the calibration on a basis of the first estimation information and the second estimation information estimated on a basis of the first captured image and the second captured image whose image-capturing times are determined to have a difference which is smaller than a threshold. However, ROS teaches a method for camera calibration using two images from two different cameras (ROS, see right camera image and left camera image in the code section in pg. 2, section 4 “Start the Calibration”), comprising a process of comparing image-capturing times of a first captured image and a second captured image that are used for the calibration, and performs the calibration on a basis of the first captured image and the second captured image whose image-capturing times are determined to have a difference which is smaller than a threshold (ROS, pg. 2, section 4: “the camera calibrator to work with images that do not have the exact same timestamp. Currently it is set to 0.1 seconds. In this case, as long as the timestamp difference is less than 0.1 seconds, the calibrator will run with no problem”).
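The timestamp check that ROS describes, accepting an image pair only when the two capture times differ by less than 0.1 seconds, can be sketched as follows. Apart from the 0.1 s figure from the cited tutorial, the data structure and function names are illustrative assumptions:

```python
# Sketch of timestamp-based pairing as described in the ROS
# stereo-calibration tutorial: keep an image pair only if the two capture
# times differ by less than the tolerance. Names here are hypothetical.
from dataclasses import dataclass

MAX_SKEW_S = 0.1  # tolerance cited in the ROS tutorial

@dataclass
class Frame:
    stamp: float   # capture time in seconds
    pixels: bytes  # image payload (placeholder)

def usable_pairs(first_cam, second_cam, max_skew=MAX_SKEW_S):
    """Zip two frame streams, keeping pairs within the time tolerance."""
    return [
        (a, b)
        for a, b in zip(first_cam, second_cam)
        if abs(a.stamp - b.stamp) < max_skew
    ]

left = [Frame(0.00, b""), Frame(0.50, b"")]
right = [Frame(0.04, b""), Frame(0.75, b"")]
pairs = usable_pairs(left, right)
assert len(pairs) == 1  # the 0.50/0.75 pair (0.25 s apart) is rejected
```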
Katz in view of Southworth discloses a base method for calibrating a camera using first and second captured images, but does not specify a method for performing the calibration based on the image-capturing times. ROS teaches a method for calibrating a camera using first and second captured images based on the difference in image-capturing times: a known technique of filtering out image pairs that differ greatly in capture time, ensuring that the images used portray the same or a very similar moment in time. A person having ordinary skill in the art, before the effective filing date of the claimed invention, could have applied this known technique, as taught by ROS, in the same way to the first and second estimation information of Katz in view of Southworth and achieved the predictable result of improving the calibration by using corresponding image pairs. In the calibration, two different estimations of the position and posture of the head mounted display are compared; if the images differed greatly in time, one estimation could have been captured after the user had moved, rendering the calibration obsolete.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Katz in view of Southworth, in further view of Ichikawa et al. (WO Publication No. 2019/130900 A1), hereinafter Ichikawa.

Regarding claim 16 (dependent on claim 1), Katz in view of Southworth fails to teach comprising: a display processing section that executes a first display process of superimposing a virtual object on a basis of the first estimation information, and a second display process of superimposing the virtual object on a basis of the second estimation information; and a presentation processing section that presents choices for allowing selection of either the virtual object superimposed by the first display process or the virtual object superimposed by the second display process.
However, Ichikawa teaches a head mounted display for displaying augmented reality content (Ichikawa, pg. 9-10, para 17-18). Ichikawa teaches a display processing section (pg. 19, para 40: "correction unit 104 corrects parameters related to the display positions of one or more virtual objects based on the user's instruction information for one or more correction objects displayed on the display unit 124 under the control of the display control unit 106") that executes a first display process of superimposing a virtual object on a basis of first estimation information (pg. 20, para 42: "position of depth object 50a"), and a second display process of superimposing the virtual object on a basis of second estimation information (pg. 20, para 42: "position of depth object 50b"); and a presentation processing section that presents choices for allowing selection of either the virtual object superimposed by the first display process or the virtual object superimposed by the second display process (pg. 20, para 42: "when instruction information from the user instructing that the position of depth object 50a be moved to the position of depth object 50b is acquired, the correction unit 104 corrects the value of the parameter related to the display position of the one or more virtual objects by the amount of movement of the depth object 50 indicated by the instruction information"; see related FIG. 5 attached below).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the display processing section, as taught by Ichikawa, with the apparatus and first and second estimation information of Katz in view of Southworth in order to allow the user to aid the system in correctly displaying virtual objects when the sensors of the system fail to display them accurately (Ichikawa, pg. 13, para 26: "Such an error may occur, for example, when the sensing accuracy of a depth sensor (described later) included in the eyewear 10 is low. For example, the error may occur due to an inaccurate value being set as the value of the internal parameter of the depth sensor").

[FIG. 5 of Ichikawa, reproduced in greyscale in the original Office Action]

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Katz in view of Southworth, in further view of Moreno Parejo (ES Patent No. 2704327 A1), hereinafter Moreno.

Regarding claim 18 (dependent on claim 1), Katz in view of Southworth fails to teach wherein the space where the user exists is assumed to be an inner space of a mobile body. However, Moreno teaches a similar system (Moreno, method for displaying virtual reality information, see pg. 1-2, para 3-11) wherein the space where the user exists is assumed to be an inner space of a mobile body (Moreno, pg. 4, para 36: "the invention consists of a method for displaying virtual reality information in a vehicle, where a virtual reality device is used by a user inside the vehicle"). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the apparatus of Katz in view of Southworth with the mobile body of Moreno in order to mobilize the virtual reality headset system, allowing the user to use the system in a plurality of locations (Moreno, head mounted display used while driving, pg. 19, para 160: "The virtual reality device 3 is preferably placed on the head 21 of the user 2, whether a driver or a passenger when they are inside a vehicle 1").

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Liao et al. (U.S. Patent Application Publication No. 2022/0066547 A1) teaches a system with a head mounted display and fixed camera (see abstract and FIG. 1) wherein a transformation matrix is utilized (para 46).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMA E DRYDEN, whose telephone number is (571) 272-1179. The examiner can normally be reached M-F 9-5 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ANDREW BEE, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EMMA E DRYDEN/
Examiner, Art Unit 2677

/JAYESH A PATEL/
Primary Examiner, Art Unit 2677

Prosecution Timeline

Sep 15, 2023
Application Filed
Dec 23, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561873
IMAGE PROCESSING APPARATUS AND METHOD
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12543950
SLIT LAMP MICROSCOPE, OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC SYSTEM, METHOD OF CONTROLLING SLIT LAMP MICROSCOPE, AND RECORDING MEDIUM
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12526379
AUTOMATIC IMAGE ORIENTATION VIA ZONE DETECTION
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12340443
METHOD AND APPARATUS FOR ACCELERATED ACQUISITION AND ARTIFACT REDUCTION OF UNDERSAMPLED MRI USING A K-SPACE TRANSFORMER NETWORK
Granted Jun 24, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview: 83% (+25.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
