Prosecution Insights
Last updated: April 19, 2026
Application No. 17/784,109

A CAMERA SYSTEM FOR A MOBILE DEVICE, A METHOD FOR LOCALIZING A CAMERA AND A METHOD FOR LOCALIZING MULTIPLE CAMERAS

Final Rejection — §103, §112
Filed
Jun 10, 2022
Examiner
GOEBEL, EMMA ROSE
Art Unit
2662
Tech Center
2600 — Communications
Assignee
Sony Semiconductor Solutions Corporation
OA Round
4 (Final)
53%
Grant Probability (Moderate)
5-6
OA Rounds
3y 0m
To Grant
99%
With Interview

Examiner Intelligence

Grants 53% of resolved cases
53%
Career Allow Rate
24 granted / 45 resolved
-8.7% vs TC avg
+47.0%
Interview Lift (strong; grant rate with vs. without interview, among resolved cases)
Typical timeline
3y 0m
Avg Prosecution
40 currently pending
Career history
85
Total Applications
across all art units
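The headline figures in this panel follow directly from the raw counts shown above. A quick sanity check, using only the numbers reported here (the variable names are ours):

```python
# Sanity-check the examiner panel's arithmetic:
# 24 granted of 45 resolved cases, 40 pending, 85 total applications.
granted, resolved = 24, 45
pending, total = 40, 85

allow_rate = granted / resolved
assert resolved + pending == total              # 45 resolved + 40 pending = 85 filed
print(f"Career allow rate: {allow_rate:.0%}")   # Career allow rate: 53%
```

24/45 is 53.3%, which the panel rounds to the 53% shown as both the career allow rate and the baseline grant probability.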

Statute-Specific Performance

§101: 18.2% (-21.8% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 45 resolved cases
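The "vs TC avg" deltas are internally consistent: subtracting each delta from the examiner's rate recovers the same implied Tech Center baseline for every statute. A short check, using only the figures reported above (the dictionaries are ours, not part of the source data):

```python
# Recover the implied Tech Center average behind each "vs TC avg" delta:
# TC average = examiner's allowance rate - delta.
examiner_rate = {"101": 18.2, "103": 60.1, "102": 11.8, "112": 8.4}    # percent
delta_vs_tc   = {"101": -21.8, "103": 20.1, "102": -28.2, "112": -31.6}

tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)   # every statute resolves to the same 40.0% baseline
```

All four statutes resolve to a 40.0% baseline, suggesting the chart compares against a single Tech Center average estimate rather than per-statute averages.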

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgement is made of Applicant's claim of priority from National Stage Application No. PCT/EP2020/082448, filed November 17, 2020, and Foreign Application No. EP19218285.5, filed December 19, 2019.

Status of Claims

Claims 1, 5, 7-12 and 18 are pending. Claims 2-4 and 6 have been cancelled.

Response to Arguments

Applicant's arguments, see pp. 9-10, filed October 22, 2025, with respect to the 35 USC 103 rejections have been fully considered but are moot because of the new grounds of rejection presented below. Applicant argues that the cited references fail to disclose "continuously determine the pose during operation without requiring pre-calibration or re-calibration in a controlled environment". However, in an analogous field of endeavor, Noble teaches an automatic calibration that allows the system to self-align during vehicle operation if the camera orientations are significantly varied by bumps, etc. (see Noble, Para. [0215]). Therefore, the 35 USC 103 rejection of the claims is maintained, and consequently, THIS ACTION IS FINAL.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1, 5, 7-12 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1 and 18 recite the newly added limitation "continuously determine the pose during operation without requiring pre-calibration or re-calibration in a controlled environment". There is insufficient antecedent basis for "the pose" in the claim. Examiner suggests moving this limitation after the limitation "determine a pose of the camera…" to remedy the lack of antecedent basis.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 5, 7-11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Clarence Chui (US 10,791,319 B1) in view of Zhou et al. (US 2020/0357141 A1, Continuation of application No. PCT/CN2018/073866, filed January 23, 2018 – US PGPub used herein as a translation and for mapping purposes) further in view of Horvath et al. (US 10,776,928 B1, filed August 6, 2019), Roose et al. (US 10,635,844 B1), Shin (US 2017/0154219 A1) and Noble (US 2019/0206084 A1).

Regarding claim 1, Chui teaches a camera system for a mobile device, comprising: processing circuitry (Chui, Col. 2, lines 5-7, processor refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions) configured to determine a change of perspective of the camera based on a shift between a first patch in a first image of the sequence of images indicative of a target within the first image and a second patch in a second image following the first image of the sequence of images indicative of the target within the second image (Chui, Col. 5, lines 1-35, slight changes in camera position generate slightly different perspectives of the same subject which can be used in combination with known camera parameters to estimate distance and pose relative to the subject. For each camera, small motions should be introduced either deliberately or inadvertently by an associated user so that the same features in different frames of the video sequence appear in different locations or with different relative spacings. 
For a subject with a feature-rich appearance (i.e., a subject with distinct markings such as edges, corners, complex surface patterns, etc.), many features detected in one frame can be identified in other subsequent frames in the video. Standard methods, including the standard SIFT (scale-invariant feature transform) methods may be applied), wherein the first patch covers unique photo-differences of the first image that correspond to a distinct portion of a contour of the target, wherein the second patch is determined by comparing the first patch with the second image to locate the photo-differences within the second image (Chui, Col. 5, lines 1-35, For a subject with a feature-rich appearance (i.e., a subject with distinct markings such as edges, corners, complex surface patterns, etc.), many features detected in one frame can be identified in other subsequent frames in the video. Standard methods, including the standard SIFT (scale-invariant feature transform) methods may be applied. Correspondence is established for features between at least two frames within a recorded sequence captured by a prescribed camera), determine a pose of the camera within the environment from a correlation between the motion data and the change of perspective (Chui, Col. 3, line 54-Col. 4, line 8, a relative pose of each camera with respect to the common set of identified points in the scene is determined. In various embodiments, the relative pose of each camera may be determined with respect to any combination of one or more points in the common set of points. The image and/or video data from each camera may be processed using feature extraction, tracking, and/or correspondence algorithms to generate an estimate of camera pose with respect to image content, i.e., the common set of identified points. 
Moreover, sensor data associated with the camera may be employed to verify and/or provide a parallel estimate of individual camera pose as well as fill in gaps for pose estimate when pose cannot be obtained visually, e.g., during instances of time when the camera field of view does not include the scene under consideration and/or the common set of identified points. Furthermore, motion estimates from sensor data may be employed to calculate the approximate view from each camera as a function of time), determine a position of the target from the correlation between the change of perspective and the motion data by scaling a visually measurable pose to obtain an absolute pose of the camera towards the target (Chui, Col. 5, lines 1-35, once correspondence is established for features between at least two of the frames of a recorded sequence captured by an individual device, the relative pose between the two (or more positions) as well as the relative positions of the features identified in 3D space (i.e., target) can be estimated simultaneously. Absolute distances and scale between multiple views are calculated by either estimating the focal lengths using known methods or by foreknowledge of the camera parameters being used to record the sequence).

Although Chui teaches a scene comprising a plurality of objects is simultaneously filmed from various angles and perspectives by a plurality of mobile phone cameras (Chui, Col. 2, lines 28-49), Chui does not explicitly teach "at least one camera, wherein the camera is freely mounted to the mobile device, wherein the camera is configured to provide a sequence of images of an environment" and "at least one motion sensor configured to provide motion data of the camera". 
However, in an analogous field of endeavor, Zhou teaches a payload, such as a camera or video system, connected or attached to a movable object by a carrier, which may allow for one or more degrees of relative movement between the payload and movable object (Zhou, Para. [0023]) and sensors, including an inertial (IMU) sensor, for detecting movement in one or more dimensions, including rotational and translational movements (Zhou, Para. [0036]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Chui with the teachings of Zhou by including a camera freely mounted to the mobile device and a motion sensor for providing motion data. One having ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to combine these references because doing so would allow for calibrating an optical system on a movable object, as recognized by Zhou.

Although Chui in view of Zhou teaches standard SIFT (scale-invariant feature transform) may be applied for identifying features in subsequent frames (Chui, Col. 5, lines 1-35), they do not explicitly teach "wherein the shift includes affine correspondences between the first and second patches defining a transition". However, in an analogous field of endeavor, Horvath teaches determining relative displacements between pairs of image patches obtained from the previous frame and current frame edge images. This results in a local translation vector (Horvath, Col. 5, lines 61-67). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Chui in view of Zhou with the teachings of Horvath by including that the shift between patch positions includes affine correspondences (i.e., translation vector) between the first and second patches defining a transition. 
One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for improved motion estimation accuracy, as recognized by Horvath.

Although Chui in view of Zhou further in view of Horvath teaches motion estimates from sensor data may be employed to calculate the approximate view from each camera as a function of time (Chui, Col. 3, line 54-Col. 4, line 8), they do not explicitly teach "wherein the correlation provides an observation model that relates the motion data and visual data from the images". However, in an analogous field of endeavor, Roose teaches the object detector model may emulate position and velocity variance and cross variance of a Kalman filter (Roose, Col. 4, lines 39-50). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Chui in view of Zhou further in view of Horvath with the teachings of Roose by including an observation model relating the motion data and visual data. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for generating synthetic object data for a sensor, as recognized by Roose.

Although Chui in view of Zhou further in view of Horvath and Roose teaches determining the position of the target (Chui, Col. 5, lines 1-35), they do not explicitly teach to "register the position of the target in a digital map of the environment". However, in an analogous field of endeavor, Shin teaches a map creating unit that is provided with the current position information estimated by the position recognizing unit, and reconstructs a pose graph based on the provided position information, and updates the previously stored key frame set based on the reconstructed pose graph. Shin further teaches a map may be configured by the set of key frames (Shin, Para. [0166]). 
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the camera system of Chui in view of Zhou further in view of Horvath and Roose with the teachings of Shin by including creating a map of the environment based on the position information. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for mobile robots to recognize a space and their own position within that space, as recognized by Shin.

Although Chui in view of Zhou further in view of Horvath, Roose and Shin teaches X, they do not explicitly teach to "continuously determine the pose during operation without requiring pre-calibration or re-calibration in a controlled environment". However, in an analogous field of endeavor, Noble teaches a system may determine the camera pose actively using on-board orientation sensors mounted to the cameras. The cameras are fitted with actuators for actively adjusting the orientations of cameras. This automatic calibration allows the system to self-align during vehicle operation if the camera orientations are significantly varied by bumps, etc. By accurately knowing the pose of the cameras, errors associated with tracking or projecting objects from one camera pose to another are greatly reduced (Noble, Para. [0125]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the camera system of Chui in view of Zhou further in view of Horvath, Roose and Shin with the teachings of Noble by including that the pose is continuously determined using automatic calibration (i.e., without requiring pre-calibration or re-calibration in a controlled environment). 
One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for automatic camera calibration for accurately tracking objects with reduced errors, as recognized by Noble. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Regarding claim 5, Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble teaches the camera system of claim 1, and further teaches wherein the data processing circuitry is further configured to determine a third patch indicative of the target within a third image following the second image by comparing the first patch with the third image (Zhou, Para. [0057], lines 2-4; Fig. 8, multiple images are captured over time, and the disparity of each feature point between images is calculated to identify whether the feature point is a far point. Although four images are shown in Fig. 8 for exemplary purposes, it is contemplated that fewer or more images may be captured); and determine the change of perspective from a shift between the first and the third patch (Chui, Col. 5, lines 1-35, refining the position/pose of each camera relative to the subject (i.e., target) by taking advantage of small changes in perspective. Slight changes in camera position generate slightly different perspectives of the same subject which can be used in combination with known camera parameters to estimate distance and pose relative to the subject. Once correspondence is established for features between at least two of the frames of a recorded sequence captured by an individual device, the relative pose between the two (or more positions) as well as the relative positions of the features identified in 3D space (i.e., position of the target) can be estimated simultaneously). 
The proposed combination as well as the motivation for combining the Chui, Zhou, Horvath, Roose, Shin and Noble references presented in the rejection of Claim 1, apply to Claim 5 and are incorporated herein by reference. Thus, the system recited in Claim 5 is met by Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble.

Regarding claim 7, Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble teaches the system of claim 1, and further teaches wherein the data processing circuitry is further configured to determine a velocity of the target by tracking the position of the target (Shin, Para. [0115]-[0119], obtaining an angular velocity θg of a robot using a gyroscope and obtaining an angular velocity θc using a wide angle stereo vision). The proposed combination as well as the motivation for combining the Chui, Zhou, Horvath, Roose, Shin and Noble references presented in the rejection of Claim 1, apply to Claim 7 and are incorporated herein by reference. Thus, the system recited in Claim 7 is met by Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble.

Regarding claim 8, Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble teaches the camera system of claim 1 and further teaches wherein the data processing circuitry is further configured to: determine a fourth patch indicative of the target within a fourth image based on the pose of the camera and the digital map (Zhou, Para. [0057], lines 2-4; Fig. 8, Zhou teaches multiple images are captured over time, and the disparity of each feature point between images is calculated to identify whether the feature point is a far point. Although four images are shown in Fig. 8 for exemplary purposes, it is contemplated that fewer or more images may be captured). 
The proposed combination as well as the motivation for combining the Chui, Zhou, Horvath, Roose, Shin and Noble references presented in the rejection of Claim 1, apply to Claim 8 and are incorporated herein by reference. Thus, the system recited in Claim 8 is met by Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble.

Regarding claim 9, Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble teaches the camera system of claim 1, and further teaches wherein the motion measurement sensor comprises an inertial measurement unit, IMU, which is rigidly mounted to the camera and configured to provide at least a portion of the motion data (Zhou, Para. [0029], Zhou teaches an exemplary control system that may be included on, connected to, or otherwise associated with the movable object, that may include one or more sensors for determining changes in posture and/or location of a movable object. Para. [0036], line 2, Zhou teaches sensors may include an inertial (IMU) sensor). The proposed combination as well as the motivation for combining the Chui, Zhou, Horvath, Roose, Shin and Noble references presented in the rejection of Claim 1, apply to Claim 9 and are incorporated herein by reference. Thus, the system recited in Claim 9 is met by Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble.

Regarding claim 10, Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble teaches the camera system of claim 1, and further teaches wherein the motion measurement sensor comprises a global positioning system, GPS, sensor which is installed at the mobile device and configured to provide at least a portion of the motion data (Zhou, Para. [0029], Zhou teaches a control system that may be included on, connected to, or otherwise associated with the movable object that includes a positioning device. Para. [0035], lines 1-2, Zhou teaches a positioning device that is a component configured to operate in a positioning system, such as a global positioning system (GPS)). The proposed combination as well as the motivation for combining the Chui, Zhou, Horvath, Roose, Shin and Noble references presented in the rejection of Claim 1, apply to Claim 10 and are incorporated herein by reference. Thus, the system recited in Claim 10 is met by Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble.

Regarding claim 11, Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble teaches the camera system of claim 1, and further teaches wherein the camera is freely mounted to the mobile device by a camera stabilizer (Zhou, Para. [0027], lines 1-2; Fig. 1, Zhou teaches carrier 16 that includes one or more devices configured to hold the payload 14 and/or allow the payload to be adjusted, for example, a gimbal. As seen in Fig. 1, payload 14 includes the camera(s)). The proposed combination as well as the motivation for combining the Chui, Zhou, Horvath, Roose, Shin and Noble references presented in the rejection of Claim 1, apply to Claim 11 and are incorporated herein by reference. Thus, the system recited in Claim 11 is met by Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble.

Claim 18 recites a method with steps corresponding to the elements of the system recited in Claim 1. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements in its corresponding system claim. Additionally, the rationale and motivation to combine the Chui, Zhou, Horvath, Roose, Shin and Noble references, presented in the rejection of Claim 1, apply to this claim.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Clarence Chui (US 10,791,319 B1) in view of Zhou et al. (US 2020/0357141 A1, Continuation of application No. 
PCT/CN2018/073866, filed January 23, 2018 – US PGPub used herein as a translation and for mapping purposes) further in view of Horvath et al. (US 10,776,928 B1, filed August 6, 2019), Roose et al. (US 10,635,844 B1), Shin (US 2017/0154219 A1) and Noble (US 2019/0206084 A1), as applied to claims 1, 5, 7-11 and 18 above, and further in view of Muller (US 2015/0029313 A1).

Regarding claim 12, Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble teaches the camera system of claim 1, as described above. Although Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble teaches the camera is freely mounted to the mobile device (Zhou, Para. [0027]), they do not explicitly teach "wherein the camera is freely mounted to the mobile device by an elastic mounting". However, in an analogous field of endeavor, Muller teaches image recording elements positioned against one or several stop edges of the mounting plates using one or several elastic elements provided on the mounting plate which apply a spring force onto the image recording elements that forces them against the stop edges when they are arranged on the mounting plate (Muller, Para. [0031]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the camera system of Chui in view of Zhou further in view of Horvath, Roose, Shin and Noble with the teaching of Muller by including providing elastic elements on the mounting plate. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for a simple and cost-efficient stereo camera system, as recognized by Muller. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma Rose Goebel whose telephone number is (703) 756-5582. The examiner can normally be reached Monday - Friday 7:30-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Emma Rose Goebel/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662
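The patch-shift mechanism at the heart of the claim 1 mapping above (locating a first-image patch within a second image and reading the displacement as the "shift") can be made concrete with a toy example. This sketch is not code from any cited reference: the synthetic frames, the 2-by-3-pixel shift, and the exhaustive normalized cross-correlation search are all illustrative assumptions.

```python
import numpy as np

# Synthetic pair of frames: frame 2 is frame 1 shifted by (+2 rows, +3 cols),
# standing in for a small camera motion between consecutive images.
rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(2, 3), axis=(0, 1))

y0, x0, p = 10, 10, 8                     # patch location and size in frame 1
patch = frame1[y0:y0 + p, x0:x0 + p]      # "first patch" around a target feature

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Exhaustively compare the first patch against every window of frame 2;
# the best-scoring window is the "second patch".
best, best_pos = -2.0, None
for y in range(frame2.shape[0] - p):
    for x in range(frame2.shape[1] - p):
        score = ncc(patch, frame2[y:y + p, x:x + p])
        if score > best:
            best, best_pos = score, (y, x)

shift = (best_pos[0] - y0, best_pos[1] - x0)   # displacement between the patches
print(shift)   # recovers the (2, 3) motion applied above
```

A pure translation like this is the simplest case; the claimed "affine correspondences" generalize the per-patch displacement to a full affine transform (rotation, scale, shear), and feature-based methods such as SIFT replace the exhaustive window search.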

Prosecution Timeline

Jun 10, 2022
Application Filed
Nov 06, 2024
Non-Final Rejection — §103, §112
Dec 24, 2024
Interview Requested
Jan 08, 2025
Examiner Interview Summary
Jan 08, 2025
Applicant Interview (Telephonic)
Jan 23, 2025
Response Filed
Feb 11, 2025
Final Rejection — §103, §112
Mar 07, 2025
Interview Requested
Mar 13, 2025
Examiner Interview Summary
Mar 13, 2025
Applicant Interview (Telephonic)
May 21, 2025
Request for Continued Examination
May 22, 2025
Response after Non-Final Action
Jul 17, 2025
Non-Final Rejection — §103, §112
Sep 09, 2025
Interview Requested
Sep 17, 2025
Applicant Interview (Telephonic)
Sep 17, 2025
Examiner Interview Summary
Oct 22, 2025
Response Filed
Nov 17, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597236
FINE-TUNING JOINT TEXT-IMAGE ENCODERS USING REPROGRAMMING
2y 5m to grant Granted Apr 07, 2026
Patent 12597129
METHOD FOR ANALYZING IMMUNOHISTOCHEMISTRY IMAGES
2y 5m to grant Granted Apr 07, 2026
Patent 12597093
UNDERWATER IMAGE ENHANCEMENT METHOD AND IMAGE PROCESSING SYSTEM USING THE SAME
2y 5m to grant Granted Apr 07, 2026
Patent 12597124
DEBRIS DETERMINATION METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12588885
FAT MASS DERIVATION DEVICE, FAT MASS DERIVATION METHOD, AND FAT MASS DERIVATION PROGRAM
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
53%
Grant Probability
99%
With Interview (+47.0%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 45 resolved cases by this examiner. Grant probability derived from career allow rate.
