Prosecution Insights
Last updated: April 19, 2026
Application No. 18/560,375

Method to Automatically Calibrate Cameras and Generate Maps

Non-Final OA: §102, §103, §112

Filed: Nov 10, 2023
Examiner: DEPALMA, CAROLINE ELIZABETH
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Ssy AI Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 88% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (37 granted / 42 resolved), +26.1% vs TC avg (above average)
Interview Lift: +15.6% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 11m (typical timeline); 16 applications currently pending
Career History: 58 total applications across all art units
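The headline allow rate follows directly from the career counts above; here is a minimal arithmetic check. The +15.6% interview lift and the 99% with-interview figure are the dashboard's reported values, and the page does not state how they are combined, so they are only echoed in the comments.

```python
# Quick check of the examiner stats reported on this page.
granted, resolved = 37, 42
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 88.1%, shown as 88%

# The +15.6% interview lift and the 99% "with interview" probability are values
# reported by the dashboard; how they are combined is not specified on this page.
```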

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 29.9% (-10.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 26.7% (-13.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 42 resolved cases.
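The page does not state exactly which per-statute metric these percentages measure, but the listed deltas are internally consistent; the short check below (illustrative only) back-computes the Tech Center average implied by each row.

```python
# Recover the implied Tech Center average from each examiner rate and its
# "vs TC avg" delta; every statute works out to 40.0 on this page's figures.
figures = {"§101": (18.4, -21.6), "§103": (29.9, -10.1),
           "§102": (20.5, -19.5), "§112": (26.7, -13.3)}
for statute, (examiner_rate, delta) in figures.items():
    print(statute, round(examiner_rate - delta, 1))  # 40.0 for each statute
```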

Office Action

Rejections under §102, §103, and §112.
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 4 is objected to because of the following informalities: Claim 4 recites the limitation "determining sizes of pixels at different location of the image", which is unclear. The term "sizes of pixels" seems to refer to the real-world area to which the pixel area in the image corresponds (see specification, e.g., [0102], [00119]). Examiner suggests amending this term in the claim to increase clarity. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3 and 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 3 recites the limitations "the part of the object contacting the ground" and "the ground" in lines 2-3. There is insufficient antecedent basis for these limitations in the claim.

Claim 12 recites the limitation "the room" in line 2. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 7-10, 12-13, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Murray (US 20200226361 A1).

Regarding claim 1, Murray discloses a method of calibrating a video camera capturing a space ([0008], [0013] mapping an environment of a camera to determine the relationship between a camera position and the floor plane using only camera intrinsic information) comprising: a. receiving images from the camera (Fig. 2, 4; [0009], [0021], [0038]: receiving frames (i.e. images) from a camera); b. identifying objects within the images (Fig. 4; [0038] identify a person or other object in the frames); c. tracking the identified objects as they move ([0009], [0014], [0016]: tracking the person (i.e. objects) as they move throughout the environment); d. using changes in size of the tracked objects as they move to infer perspective in the video camera ([0019]-[0020] using pixel distances of the person's body (i.e. size of the tracked person/object) to determine distance moved by the person, inferring a floor plane in the images captured by the camera based on the movement of the person; [0037], [0040] other outputs may include a camera orientation, location, or angle); and e. using the inferred perspective to create a mapping model to convert pixels in the received images to a metrical map of the space (Fig. 3, 4; [0037], [0040]: a 3D map is generated based on the primary axis (i.e. size) and endpoints of the tracked object across frames, including an angle of orientation of the camera to the 3D floor plane included in the map).

Regarding claim 2, Murray discloses a method of creating a map from a video camera capturing a space ([0008] system and methods for mapping an environment observable via a camera without knowledge of camera parameters) comprising: a. receiving images from the camera (Fig. 2, 4; [0009], [0021], [0038]: receiving frames (i.e. images) from a camera); b. identifying objects within the images (Fig. 4; [0038] identify a person or other object in the frames); c. tracking the identified objects as they move ([0009], [0014], [0016]: tracking the person (i.e. objects) as they move throughout the environment); d. using changes in size of the tracked objects to infer perspective in the images ([0019]-[0020] using pixel distances of the person's body (i.e. size of the tracked person/object) to determine distance moved by the person, inferring a floor plane in the images captured by the camera based on the movement of the person; [0037], [0040] other outputs may include a camera orientation, location, or angle); e. determining parts of the objects that contact a floor to determine locations of ground pixels in the image ([0018] determining parts of the person that are likely to be next to/on top of a floor pixel); and f. creating a 2D metrical map of the space from the inferred perspective and the locations of ground pixels (Fig. 3, [0020] a map may be 2D and based on the inferred floor from the skeleton information of the person; [0037] the map, which may be 2D, is based on the skeleton information and the inferred perspective based on movement of the person/object).

Regarding claim 3, Murray discloses the method of claim 1 as applied above. Murray further discloses further comprising using pose estimation to determine a leg as the part of the object contacting the ground ([0018] determining a lower limb of a person (i.e. leg) as being next to/on top of a floor (i.e. ground)).

Regarding claim 7, Murray discloses the method of claim 1 as applied above. Murray further discloses wherein a segmentation model is used to identify objects ([0009], [0016] identifying pixels as constituting types of objects and/or types of regions of objects (e.g. person, person's leg or torso), i.e. segmenting objects and portions of objects).

Regarding claim 8, Murray discloses the method of claim 1 as applied above. Murray further discloses wherein a semantic segmentation model is used to identify objects ([0009], [0016] identifying pixels as constituting types of objects and/or types of regions of objects (e.g. person, person's leg or torso), i.e. segmenting objects and portions of objects).

Regarding claim 9, Murray discloses the method of claim 1 as applied above. Murray further discloses further comprising building up a statistical inference model of a height of the objects ([0019], [0028]: using a projective geometry (i.e. statistical inference) model of a height of the person to determine the distance moved by the person).

Regarding claim 10, Murray discloses the method of claim 1 as applied above. Murray further discloses further comprising classifying objects as moving or stationary objects ([0020] identifying movement of the person or other objects; [0011], [0037] identifying furniture or other locations which the person interacts with when sitting or lying down (i.e. classifying objects as being stationary based on whether the person is stationary when interacting with the object)).

Regarding claim 12, Murray discloses the method of claim 1 as applied above. Murray further discloses further comprising defining a pose of the camera with respect to the room from the changes in size of the tracked objects ([0037], [0040] other outputs may include a camera orientation, location, or angle (i.e. pose) relative to the identified floor/ground based on the tracked objects).

Regarding claim 13, Murray discloses the method of claim 1 as applied above. Murray further discloses wherein identifying and tracking objects is performed during a calibration period ([0013] determining a relationship between a camera and a floor (i.e. calibration) using only camera intrinsics information; [0037] determining the camera position/angle based on object tracking and identification).

Regarding claim 15, Murray discloses the method of claim 1 as applied above. Murray further discloses further comprising continuing to track objects using the mapping model to compute movement metrics for a given object ([0020], [0037] generating a heat map of the person's movement showing the person's locations and activities based on the generated map).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 5-6, 11, 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Murray (US 20200226361 A1) in view of Nielsen (US 20160379074 A1).

Regarding claim 5, Murray discloses the method of claim 1 as applied above. Murray fails to disclose further comprising creating corrected images from the received images by correcting for the inferred perspective and distortion effects of the lens. Nielsen, in a related system from the same field of endeavor of object tracking and generating a map of a space based on video camera data (Abstract, [0012], [0087]), discloses further comprising creating corrected images from the received images by correcting for the inferred perspective and distortion effects of the lens (Fig. 68A, 68B, 70; [0729]-[0731]: camera distortion effects and inferred perspective in the image in Fig. 68 are corrected and the corrected image displayed in Fig. 70). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to combine Nielsen with Murray and provide images that have been corrected for camera distortion and inferred perspective, as disclosed by Nielsen, as part of a method of calibrating a video camera capturing a space, as disclosed by Murray, for the purpose of object tracking and measurements with high accuracy and robustness (see Nielsen [0004]).

Regarding claim 6, Murray discloses the method of claim 1 as applied above. Murray fails to disclose wherein objects have fiduciary markings to uniquely identify them. Nielsen, in a related system from the same field of endeavor of object tracking and generating a map of a space based on video camera data (Abstract, [0012], [0087]), discloses wherein objects have fiduciary markings to uniquely identify them ([0024] the objects to be tracked may have an identification mark which may be recognized by the computer vision program). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to combine Nielsen with Murray and uniquely identify objects to be tracked with fiduciary markings, as disclosed by Nielsen, as part of a method of calibrating a video camera capturing a space, as disclosed by Murray, for the purpose of object tracking and measurements with high accuracy and robustness (see Nielsen [0004]).

Regarding claim 11, Murray discloses the method of claim 1 as applied above. Murray fails to disclose further comprising inputting floor plans to constrain the map creation and register identified objects of the map to features of the floor plan. Nielsen, in a related system from the same field of endeavor of object tracking and generating a map of a space based on video camera data (Abstract, [0012], [0087]), discloses further comprising inputting floor plans to constrain the map creation and register identified objects of the map to features of the floor plan ([0486] applying constraints of room dimensions and locations of obstructions as features of a floor plan). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to combine Nielsen with Murray and constrain the map with floor plans and register identified objects, as disclosed by Nielsen, as part of a method of calibrating a video camera capturing a space, as disclosed by Murray, for the purpose of object tracking and measurements with high accuracy and robustness (see Nielsen [0004]).

Regarding claim 14, Murray discloses the method of claim 1 as applied above. Murray fails to disclose wherein at least some of the objects carry an IMU to identify that object and its dimensions. Nielsen, in a related system from the same field of endeavor of object tracking and generating a map of a space based on video camera data (Abstract, [0012], [0087]), discloses wherein at least some of the objects carry an IMU to identify that object and its dimensions ([0289], [0301]-[0303] objects may include an IMU to detect motion and detect the object; [0441]-[0442] the IMU output may provide additional information related to the size of the object (i.e. dimensions)). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to combine Nielsen with Murray and include an IMU to identify objects and dimensions, as disclosed by Nielsen, as part of a method of calibrating a video camera capturing a space, as disclosed by Murray, for the purpose of object tracking and measurements with high accuracy and robustness (see Nielsen [0004]).

Regarding claim 16, Murray discloses the method of claim 1 as applied above. Murray further discloses one or more video cameras capturing a space (Fig. 1, 4; [0008], [0038]: camera capable of capturing video data, capturing a camera environment); a computer operatively connected to receive video from the one or more video cameras (Fig. 5, [0041], [0049]: a device including a processor (i.e. computer) receiving a series of frames of video data captured by the camera); and a memory storing instructions, which when executed by the computer ([0049] memory storing instructions causing a processor to execute a method), cause the computer to carry out the method of claim 1 (as applied above). Murray fails to disclose a database of objects expected to be in that space and their dimensions. Nielsen, in a related system from the same field of endeavor of object tracking and generating a map of a space based on video camera data (Abstract, [0012], [0087]), discloses a database of objects expected to be in that space and their dimensions ([0284] a database includes identities of mobile objects to identify the objects in the space; [0486] may include pre-stored dimensions of objects or the space). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to combine Nielsen with Murray and include a database of objects expected to be in the space and their dimensions, as disclosed by Nielsen, as part of a method of calibrating a video camera capturing a space, as disclosed by Murray, for the purpose of object tracking and measurements with high accuracy and robustness (see Nielsen [0004]).

Allowable Subject Matter

Claim 4 would be allowable if rewritten to overcome the objection set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Regarding claim 4, Murray discloses the method of claim 1 as applied above. However, neither Murray nor any obvious combination of the closest known prior art discloses further comprising determining sizes of pixels at different location of the image.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Motoyama (US 20200209401 A1) discloses generating a map of a camera environment including semantic segmentation and determining the distance of an object to the camera and whether the object is in contact with a ground surface. Fotland (US 20150055821) discloses object tracking which may include camera calibration and generating a map of the environment.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLINE DEPALMA, whose telephone number is (571) 270-0769. The examiner can normally be reached Mon-Thurs 7:00am-4:00pm Eastern Time. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at 571-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAROLINE E. DEPALMA/
Examiner, Art Unit 2675

/SJ Park/
Primary Examiner, Art Unit 2675
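For orientation, the claim 1 limitations mapped above amount to: track a moving object, use the change in its apparent (pixel) size to infer perspective, then convert image pixels to metric ground coordinates. The sketch below only illustrates that general idea; it is not the applicant's claimed method or Murray's disclosed implementation, and every name in it (Observation, row_scales, the 1.7 m reference height) is a hypothetical placeholder.

```python
# Illustrative sketch only: estimate a metres-per-pixel scale at each image row
# from a tracked person's apparent pixel height, then convert pixel
# displacements near that row into approximate metric distances.
from dataclasses import dataclass

ASSUMED_PERSON_HEIGHT_M = 1.7  # hypothetical reference height for the tracked person


@dataclass
class Observation:
    foot_row: int        # image row of the person's ground-contact point
    pixel_height: float  # apparent height of the person in pixels


def row_scales(observations: list[Observation]) -> dict[int, float]:
    """Average metres-per-pixel at each image row where the person was observed."""
    samples: dict[int, list[float]] = {}
    for obs in observations:
        if obs.pixel_height > 0:
            samples.setdefault(obs.foot_row, []).append(
                ASSUMED_PERSON_HEIGHT_M / obs.pixel_height
            )
    return {row: sum(v) / len(v) for row, v in samples.items()}


def pixels_to_metres(pixel_distance: float, row: int,
                     scales: dict[int, float]) -> float:
    """Convert a pixel displacement measured near `row` into metres,
    using the scale calibrated at the nearest observed row."""
    nearest_row = min(scales, key=lambda r: abs(r - row))
    return pixel_distance * scales[nearest_row]


# Example: the same 100-pixel step corresponds to more metres when the person
# appears smaller (farther from the camera), which is the inferred perspective.
scales = row_scales([Observation(700, 220.0), Observation(420, 80.0)])
print(pixels_to_metres(100, 700, scales))  # about 0.77 m near the camera
print(pixels_to_metres(100, 420, scales))  # about 2.1 m farther away
```

A real calibration would fit a ground-plane homography or a full camera model rather than a per-row scale, but the per-row version keeps the perspective intuition behind the rejected claims visible.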

Prosecution Timeline

Nov 10, 2023: Application Filed
Jun 02, 2025: Response after Non-Final Action
Dec 08, 2025: Non-Final Rejection (§102, §103, §112)
Feb 02, 2026: Response after Non-Final Action
Feb 02, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602777: APPARATUS AND METHOD FOR QUANTITATIVE ASSESSMENT OF MEDICAL IMAGES FOR DIAGNOSIS OF CHRONIC OBSTRUCTIVE PULMONARY DISEASE
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12586409: DETECTING EMOTIONAL STATE OF A USER BASED ON FACIAL APPEARANCE AND VISUAL PERCEPTION INFORMATION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586246: SYSTEM AND METHOD FOR VICARIOUS CALIBRATION OF OPTICAL DATA FROM SATELLITE SENSORS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573046: METHODS AND SYSTEMS FOR ANALYZING BRAIN LESIONS FOR THE DIAGNOSIS OF MULTIPLE SCLEROSIS
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567226: METHOD AND DEVICE OF ACQUIRING FEATURE INFORMATION OF DETECTED OBJECT, APPARATUS AND MEDIUM
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 99% (+15.6%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 42 resolved cases by this examiner. Grant probability derived from career allow rate.
