Prosecution Insights
Last updated: April 19, 2026
Application No. 18/733,227

INFORMATION PROCESSING DEVICE TO CONTROL DISPLAY OF IMAGE, CONTROL METHOD FOR INFORMATION PROCESSING DEVICE, AND STORAGE MEDIUM

Non-Final OA §103
Filed: Jun 04, 2024
Examiner: GOCO, JOHN PATRICK
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; resolved cases with vs. without an interview)
Typical Timeline: 2y 9m average prosecution
Career History: 8 total applications across all art units; 8 currently pending

Statute-Specific Performance

§103: 68.8% (+28.8% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Comparison baseline: Tech Center average estimate • Based on career data from 0 resolved cases
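For reference, the three deltas are internally consistent: each implies a Tech Center average estimate of roughly 40% (68.8% - 28.8% = 40.0%; 18.8% + 21.2% = 40.0%; 12.5% + 27.5% = 40.0%).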

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

2. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Interpretation

3. The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a display control unit, a first obtaining unit, a second obtaining unit, an image obtaining unit, an estimation unit, and a third obtaining unit in claim 1.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

6. Claims 1 and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over US 9884248 B2 (Koseki et al., hereinafter Koseki) in view of US 11467660 B2 (Salmani et al., hereinafter Salmani).

Regarding claim 1, Koseki teaches an information processing device connected to or integrated into a display device configured to receive an input via a controller (Par 20 “According to another embodiment of the invention, there is provided an image generation device that generates an image that is displayed on a head-mounted display (HMD) that is worn on a head of a user, the image generation device comprising: a captured image-receiving section that receives data that represents a captured image from an imaging section, the imaging section being provided so as to capture a real space that includes the HMD, and a controller that is held and operated by the user; a controller position calculation section that calculates a virtual position of the controller within a virtual space using the captured image, the virtual space being a display image space of the HMD; and a first guide display control section that displays a first guide display on the HMD using the virtual position, the first guide display indicating a position within the virtual space that corresponds to a position of the controller within the real space.”), the information processing device comprising:

a display control unit configured to control the display device to display a virtual object (Par 20 “there is provided an image generation device that generates an image that is displayed on a head-mounted display (HMD) that is worn on a head of a user”);

a first obtaining unit configured to obtain a first amount of change as an amount of change in one of a position and orientation of the controller (Par 80 “It is possible to determine the posture of the game controller 1200 and a change in the posture of the game controller 1200 at the position Pc using the 6-axis sensor 1208 included in the game controller 1200.”);

a second obtaining unit configured to obtain a second amount of change as an amount of change in one of a position and an orientation of the display device (Par 118 “The head posture change detection section 102 detects the posture of the head of the player 2 and a change in the posture of the head of the player 2”);

an image obtaining unit configured to obtain a captured image (Par 19 “causing the computer to receive data that represents a captured image from an imaging section”);

wherein the display control unit controls the display device to display the virtual object based on a fourth amount of change, the fourth amount of change being an amount of change in one of a position and an orientation based on the first amount of change, the second amount of change, and the third amount of change.
(Par 214 “The above embodiments have been described taking an example in which the posture of the HMD 1310 is detected using the image captured by the range sensor unit 1100 and the detection results of the 6-axis sensor 1308 included in the HMD 1310. Note that the configuration is not limited thereto.” Par 136 “The direction guide object control section 224 disposes the direction guide object 13 in the virtual space, and controls the position (movement) and the posture of the direction guide object 13. Specifically, the direction guide object control section 224 controls the position of the direction guide object 13 so that the direction guide object 13 is always situated at a given position within the field of view (game screen) of the HMD 1310, and controls the posture of the direction guide object 13 so that the direction guide object 13 faces in the direction of the game controller 1200 with respect to (when viewed from) the HMD 1310.” Where the position or change in position of the virtual object (the direction guide object) is displayed based on the changes in position of the HMD obtained from the image captured, the position of the HMD based on the 6-axis sensor, and the position of the controller.)

Regarding claim 1, Koseki fails to explicitly teach an estimation unit configured to estimate one of the position and the orientation of the display device based on the captured image. In a related endeavor, Salmani teaches an estimation unit configured to estimate one of the position and the orientation of the display device based on the captured image (Par 17 “The data generated by the IMU, along with the stereo imagery captured by the external-facing cameras 105A-B, allow the system 100 to compute the pose of the HMD using, for example, SLAM (simultaneous localization and mapping) or other suitable techniques.”). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Koseki to include an estimation unit configured to estimate one of the position and the orientation of the display device based on the captured image as taught by Salmani. Doing so would allow an appropriate display to be rendered for the user (Par 17 “For example, in order to render an appropriate display for the user 102 while he is moving about in a virtual or augmented reality environment, the system 100 would need to determine his position and orientation at any moment”).

Regarding claim 5, Koseki as modified by Salmani teaches the information processing device according to claim 1, and Koseki further teaches wherein the display device is a device to be worn on a user's head (Par 20 “there is provided an image generation device that generates an image that is displayed on a head-mounted display (HMD) that is worn on a head of a user”).

Regarding claim 6, Koseki as modified by Salmani teaches the information processing device according to claim 1, and Koseki further teaches wherein the display control unit controls the display device to display a background image based on the third amount of change (Par 24 “The method may further comprise: causing the computer to determine a position and/or a posture of the HMD in the real space; causing the computer to generate an image of the virtual space that is displayed on the HMD, the image of the virtual space that is displayed on the HMD changing in field of view corresponding to the determined position and/or the determined posture of the HMD”).
Regarding claim 7, the method claim 7 is similar in scope to claim 1, and is rejected under the same rationale.

Regarding claim 8, the non-transitory computer readable medium claim 8 is similar in scope to claim 1, and is rejected under the same rationale.

7. Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Koseki in view of Salmani as applied to claim 1 above, and further in view of US 20170352188 A1 (David A. Levitt, hereinafter Levitt).

Regarding claim 2, Koseki as modified by Salmani fails to explicitly teach wherein the fourth amount of change is an amount of change obtained by subtracting the second amount of change from the sum of the first amount of change and the third amount of change. In a related field of endeavor, Levitt teaches subtracting an amount of change in position from another amount of change in position, and obtaining a sum of two changes in position (Par 172 “At the master device the motion data from multiple devices may be used to calculate motions of multiple body parts and Display Device 100 … For example, first relative motion of device A to device B may be calculated, then relative motion of device B to device C, finally to confirm the first two calculations the relative motion of device C to device A is calculated. This third relative motion should be the sum of the first two. The calculations can be repeated in an iterative process until agreement in the results is reached”, Par 175 “the isolation of movement related to anatomical dimensions is used to subtract movement that is unrelated to the rotation of joints (e.g., wrist) in the human body.”). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to further modify Koseki as modified by Salmani to include subtracting an amount of change in position from another amount of change in position, and obtaining a sum of two changes in position as taught by Levitt. Doing so would allow vehicle motion to be excluded from movement calculations (Par 11 “Various embodiments include detecting relative motion associated with a user's head or limbs, and at times ignoring absolute motion such results from walking or riding in a vehicle.”).

Regarding claim 3, Koseki as modified by Salmani fails to explicitly teach wherein the fourth amount of change is an amount of change obtained by subtracting an amount of change obtained by subtracting the third amount of change from the second amount of change from the first amount of change. In a related field of endeavor, Levitt teaches subtracting an amount of change in position from another amount of change in position (Par 175 “the isolation of movement related to anatomical dimensions is used to subtract movement that is unrelated to the rotation of joints (e.g., wrist) in the human body.”). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to further modify Koseki as modified by Salmani to include subtracting an amount of change in position from another amount of change in position as taught by Levitt.
Doing so would allow vehicle motion to be excluded from movement calculations (Par 11 “Various embodiments include detecting relative motion associated with a user's head or limbs, and at times ignoring absolute motion such results from walking or riding in a vehicle.”).

Regarding claim 4, Koseki as modified by Salmani fails to explicitly teach wherein, in a case where a user riding in a moving body uses the display device, the fourth amount of change is an amount of change of the controller relative to the moving body. In a related field of endeavor, Levitt teaches an amount of change of the controller relative to the moving body (Par 11 “Various embodiments include detecting relative motion associated with a user's head or limbs, and at times ignoring absolute motion such results from walking or riding in a vehicle”, Par 175 “Some of the movements detected by Motion Sensing Device 120 include motions of the car and some motions detected by Motion Sensing Device 120 include motions resulting from turning of the user's wrist and/or elbow. Those motions that are inconsistent with the turning of the user's wrist and/or elbow can be discounted by Anchor Point Logic 1130 and/or Image Generation Logic 1140 such that the motion of the wrist and/or elbow controls the images presented at Display 110. The motion of the car does not significantly (or not at all) impact the images presented at Display 110.”). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to further modify Koseki as modified by Salmani to include an amount of change of a controller relative to a moving body as taught by Levitt. Doing so would allow vehicle motion to be ignored and excluded from movement calculations (Par 11 “Various embodiments include detecting relative motion associated with a user's head or limbs, and at times ignoring absolute motion such results from walking or riding in a vehicle.”).

Conclusion

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN PATRICK GOCO whose telephone number is (571) 272-5872. The examiner can normally be reached M-Th, 7:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN P GOCO/
Examiner, Art Unit 2619

/JASON CHAN/
Supervisory Patent Examiner, Art Unit 2619
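As a quick reference for the arithmetic the rejection attributes to claims 2 and 3, the sketch below restates the recited combinations of the first (controller), second (display device), and third amounts of change. It is an illustration only, treating each amount of change as a single scalar; the function names and numeric values are invented here and do not come from the application or from Koseki, Salmani, or Levitt.

def fourth_change_claim_2(first: float, second: float, third: float) -> float:
    # Claim 2: subtract the second amount of change (display device) from the
    # sum of the first (controller) and third amounts of change.
    return (first + third) - second

def fourth_change_claim_3(first: float, second: float, third: float) -> float:
    # Claim 3: subtract (second amount of change minus third amount of change)
    # from the first amount of change.
    return first - (second - third)

if __name__ == "__main__":
    first = 5.0    # first amount of change (controller), invented value
    second = 3.0   # second amount of change (display device), invented value
    third = 1.0    # third amount of change (not characterized in the excerpt), invented value
    print(fourth_change_claim_2(first, second, third))  # 3.0
    print(fourth_change_claim_3(first, second, third))  # 3.0 (same result)

Treated as scalars, the claim 2 and claim 3 formulations reduce to the same quantity (first + third - second); if the amounts of change were orientations composed as rotations rather than scalars, the order of operations could matter, which this sketch does not capture.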

Prosecution Timeline

Jun 04, 2024
Application Filed
Feb 05, 2026
Non-Final Rejection — §103 (current)

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
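The note above does not spell out how the projection handles an examiner with zero resolved cases. The sketch below is purely an assumption about one plausible approach (a small-sample blend toward the Tech Center average), not this page's documented formula; the function name, the prior_weight parameter, and the 0.62 figure (read off the "-62.0% vs TC avg" delta against a 0% allow rate) are all illustrative.

def blended_grant_probability(examiner_allow_rate: float,
                              resolved_cases: int,
                              tc_average: float,
                              prior_weight: float = 20.0) -> float:
    # Weight the examiner's own allow rate by how many resolved cases support it;
    # with zero resolved cases the estimate falls back entirely to the TC average.
    weight = resolved_cases / (resolved_cases + prior_weight)
    return weight * examiner_allow_rate + (1.0 - weight) * tc_average

print(blended_grant_probability(examiner_allow_rate=0.0, resolved_cases=0, tc_average=0.62))  # 0.62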
