Prosecution Insights
Last updated: April 18, 2026
Application No. 18/369,286

SYSTEMS AND METHODS FOR CONSTRUCTING HIGH RESOLUTION PANORAMIC IMAGERY FOR FEATURE IDENTIFICATION ON ROBOTIC DEVICES

Final Rejection §103
Filed: Sep 18, 2023
Examiner: SOHRABY, PARDIS
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Brain Corporation
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 12m
Grant Probability With Interview: 89%

Examiner Intelligence

Career Allow Rate: 79% (73 granted / 92 resolved), +17.3% vs TC avg — above average
Interview Lift: +9.7% (a moderate, roughly +10% lift) among resolved cases with interview
Typical Timeline: 2y 12m average prosecution; 21 applications currently pending
Career History: 113 total applications across all art units
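
The headline probabilities above are simple arithmetic on the examiner's career counts. A minimal sketch of that derivation, assuming the dashboard adds the interview lift directly to the career allow rate (variable names are illustrative, not any published API):

```python
# Reproduce the dashboard's headline numbers from the career counts above.
granted, resolved = 73, 92
allow_rate = granted / resolved                # 0.793... -> shown as 79%
interview_lift = 0.097                         # +9.7% lift with an interview
with_interview = allow_rate + interview_lift   # 0.890... -> shown as 89%

print(f"Career allow rate: {allow_rate:.1%}")             # 79.3%
print(f"Projected with interview: {with_interview:.1%}")  # 89.0%
```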

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 58.7% (+18.7% vs TC avg)
§102: 16.2% (-23.8% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Tech Center averages are estimates; based on career data from 92 resolved cases.

Office Action

§103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amended claims and associated applicant arguments/remarks filed on 11/26/2025 were received and considered. Claims 1, 6, 7, 9, 10, 12, 13, 15, 16, and 18 have been amended. Claims 19 and 20 have been added. Claims 1-20 are pending.

Response to Arguments

Applicant’s arguments, see Remarks, filed 11/26/2025, with respect to the rejection(s) of claim(s) 1-18 under 35 U.S.C. § 103 have been fully considered. However, upon further consideration, a new ground(s) of rejection is made in view of Bathala et al. (US 20210354302 A1).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3, 6-9, 12-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cier et al. (US 20210385378 A1), referred to as Cier hereinafter, and further in view of Ueno (US 20190355140 A1) and Bathala et al. (US 20210354302 A1), referred to as Bathala hereinafter.
Regarding claim 1, Cier teaches A robotic system, (“a camera held by or mounted on a user or the user's clothing; a camera mounted on an aerial and/or ground-based drone or other robotic device; etc.” Cier, para. [0022]) comprising:

a memory comprising computer readable instructions stored thereon; (“and the memory may further optionally execute one or more other programs (e.g., a browser 369; a copy of the MIGM system and/or Building Map Viewer system, not shown, such as instead of or in addition to the systems 340-345 on the server computing system(s) 300; etc.).” Cier, para. [0061])

and a processor configured to execute the computer readable instructions to: (“by using the processor(s) 361 to execute software instructions of the system 368 in a manner that configures the processor(s) 361 and mobile device 360 to perform automated operations that implement those described techniques.” Cier, para. [0061])

receive, via a sensor coupled to the robotic system, a first image of an object and a second image of the object as the robotic system moves along a route; (“In particular, FIG. 2A illustrates an example constituent image 250a taken in a northeasterly direction from acquisition location 210B in the living room of house 198 of FIG. 1B, such as for use as a first constituent image in a sequence of constituent images acquired in a full 360° horizontal circle from acquisition location 210B” Cier, para. [0037]) and (“such techniques may include using one or more mobile devices (e.g., a camera having one or more fisheye lenses and mounted on a rotatable tripod or otherwise having an automated rotation mechanism; a camera having one or more fisheye lenses sufficient to capture 360 degrees horizontally without rotation; a smart phone held and moved by a user, such as to rotate the user's body and held smart phone in a 360° circle around a vertical axis; a camera held by or mounted on a user or the user's clothing; a camera mounted on an aerial and/or ground-based drone or other robotic device; etc.) to capture visual data from a sequence of multiple acquisition locations within multiple rooms of a house (or other building)” Cier, para. [0022])

and translation of the robotic system between the first and second image; (“In block 685, the routine further estimates heights of walls in some or all rooms, such as from analysis of images and optionally sizes of known objects in the images, as well as height information about a camera when the images were acquired, and further uses such information to generate a 3D computer model of the building, with the 3D model and the floor plan being associated with each other.” Cier, para. [0088]) and (“concurrently and asynchronously with the constituent image capture, and also concurrently and asynchronously with the data compression and long-term storage and optional decompression and caching, register the cropped constituent images in a first stitching pass by aligning each of the cropped constituent image slices with the previous constituent image (e.g., by using a translational motion model) and by aligning the last selected constituent image with the previous constituent image in a similar manner, while performing additional image analysis activities in a subsequent final stitching pass;” Cier, para. [0017])

align the first and second images to form a panoramic image; (“register the cropped constituent images in a first stitching pass by aligning each of the cropped constituent image slices with the previous constituent image (e.g., by using a translational motion model) and by aligning the last selected constituent image with the previous constituent image in a similar manner, while performing additional image analysis activities in a subsequent final stitching pass;” Cier, para. [0017]) and (“complete the panorama image generation in a final stitching pass by aligning the first and last selected constituent images (e.g., by using optical flow alignment, to complete the 360° loop for the panorama image)” Cier, para. [0018])

and communicate the panoramic image to a server. (“As part of the automated panorama image generation, one or more processing modules 145 of the MICA system in memory 152 of the mobile device may be executed by one or more hardware processors 132 of the mobile device to acquire various constituent images and associated metadata 142 using one or more imaging systems 135 of the mobile device, and to combine at least some of the captured constituent images into one or more generated panorama images 143, which are subsequently transferred over one or more computer networks 170 to the storage 164 on the server computing system(s) 180.” Cier, para. [0027])
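
Cier's first stitching pass aligns each constituent image to the previous one using a translational motion model, but does not disclose a specific estimator. Phase correlation is one standard way to recover a pure translation between overlapping frames; the sketch below is an illustrative stand-in under that assumption, not Cier's implementation:

```python
import numpy as np

def estimate_translation(prev_img: np.ndarray, next_img: np.ndarray):
    """Estimate the (dy, dx) shift mapping next_img onto prev_img via
    phase correlation (a translation-only motion model, as an example)."""
    f1 = np.fft.fft2(prev_img)
    f2 = np.fft.fft2(next_img)
    cross_power = f1 * np.conj(f2)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, discard magnitude
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    h, w = prev_img.shape
    if dy > h // 2:                              # wrap indices to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```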
However, Cier does not teach determining, via a computer readable map, the distance to the object within the first and second images.

Ueno teaches determining, via a computer readable map, distance to the object within the first and second images. (“The stereo depth determination module 195 can include instructions to determine the stereo depth based on the stereo images (i.e., the first stereo image 165 and the second stereo image 170) and the maximum disparity D.sub.max. The stereo depth determination module 195 can include instructions to determine a distance in pixels between a location of the object (e.g., first object 180) represented in the first image (e.g., first image 165) and a location of the object represented in the second image (e.g., second image 170).” Ueno, para. [0028])

Cier and Ueno are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Cier in light of Ueno’s determining distance to the object. One would have been motivated to do so because it reduces processing time for the cost function. (Ueno, para. [0050])

However, the combination of Cier and Ueno does not teach wherein the translation of the robotic system on the computer readable map is determined based on odometry data of the robotic system.

Bathala teaches wherein the translation of the robotic system on the computer readable map is determined based on odometry data of the robotic system. (“determining a second motion of the robot based on data from at least one other sensor and odometry unit; and determining the pose of the sensor based on a motion discrepancy between the first motion and the second motion… determining the image discrepancy by translating and rotating the second image such that the translated and rotated second image matches the first image;” Bathala, para. [0009]), (“The motion estimation performed by the controller 118 using the two or more images may then be compared to other odometry data yielding an unconventional result in that extrinsic biases of the sensor 302 may be determined based on discrepancies between an estimated motion using the two images and an estimated motion using data from other sensor and/or odometry units.” Bathala, para. [0093])

Cier, Ueno, and Bathala are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Cier and Ueno in light of Bathala’s odometry data. One would have been motivated to do so because it can improve safety and efficiency of robots operating autonomously. (Bathala, para. [0040])
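
The stereo depth determination Ueno describes reduces to the standard pinhole relation between the pixel offset of an object across the two images and its metric distance. A minimal sketch of that relation, with focal length and baseline as assumed calibration values rather than figures from the record:

```python
def distance_from_disparity(disparity_px: float,
                            focal_px: float = 700.0,    # assumed calibration
                            baseline_m: float = 0.12):  # assumed baseline
    """Pinhole stereo: distance Z = f * B / d, where d is the pixel offset
    of the object between the first and second images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px

print(distance_from_disparity(21.0))  # 700 * 0.12 / 21 = 4.0 meters
```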
Regarding claim 2, Cier teaches the object comprises a plurality of labels, wherein each of the plurality of labels correspond to a feature of the object, the plurality of labels comprise at least one of a text or computer readable code element. (“Various types of information are illustrated on the 2D floor plan 235k in this example. For example, such types of information may include one or more of the following: room labels added to some or all rooms (e.g., “living room” for the living room); room dimensions added for some or all rooms; visual indications of fixtures or appliances or other built-in features added for some or all rooms;” Cier, para. [0054])

Regarding claim 3, Cier teaches wherein the processor is further configured to execute the computer readable instructions to: determine a bounding box for each label depicted in the first and second images; (“The final stitching phase may further include performing additional cropping by computing a bounding box that excludes/leaves out uncovered (black) regions, such as by using the maximum bounding box that excludes blank areas. The bounding box is extracted by finding the union of the footprints of the warped images (in a similar manner as the step described above for computing the footprint for blending), and computing the largest vertical range with non-blank pixels.” Cier, para. [0051]) and perform the alignment at least in part based on locations of the bounding boxes. (“determining a bounding box for the combination of stitched constituent images that satisfies one or more defined criteria (e.g., the largest bounding box that excludes regions or areas not covered in the selected constituent images) and cropping the combination of stitched constituent images according to such a bounding box, etc.” Cier, para. [0078])

Regarding claim 6, Cier teaches wherein the computer readable map includes annotations for the object to be scanned, (“after panorama images are generated, they and associated information for them (e.g., annotations, metadata, inter-connection linking information, etc.) may be stored with information 164 on one or more server computing systems 180 for later use. Such generated panorama information 164 may further be included as part of captured building interior information 165 that is subsequently used by an MIGM (Mapping Information Generation Manager) system 160 executing on one or more server computing systems 180 (whether on the same or different server computing systems on which the information 164 is stored) to generate corresponding building floor plans and/or other related mapping information 155.” Cier, para. [0026]) and the panoramic image begins and ends proximate to edges of the object on the computer readable map. (“For loop closure, the concatenated transforms for constituent images are updated such that the concatenated transform for the last constituent image is consistent with H.sub.N−1,0 by first computing the errors in transforming the corners of the image in the last constituent image (represented as dA, dB, dC and dD), with the shifts in the transformed corner for each concatenated transform for constituent images being adjusted by dA/(N−1), dB/(N−1), dC/(N−1), and dD/(N−1), where N is the quantity of constituent images and/or image angular slots.” Cier, para. [0051])

Regarding claim 7, refer to the explanation of claim 1. Regarding claim 8, refer to the explanation of claim 2. Regarding claim 9, refer to the explanation of claim 3. Regarding claim 12, refer to the explanation of claim 6.

Regarding claim 13, Cier teaches A non-transitory computer readable medium comprising computer readable instructions stored thereon that, when executed by at least one processor (“Some or all of the modules, systems and data structures may also be stored (e.g., as software instructions or structured data) on a non-transitory computer-readable storage mediums,” Cier, para. [0063]). Regarding the rest of claim 13, refer to the explanation of claim 1.

Regarding claim 14, refer to the explanation of claim 2. Regarding claim 15, refer to the explanation of claim 3. Regarding claim 18, refer to the explanation of claim 6.
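
The loop-closure passage quoted for claim 6 (Cier, para. [0051]) spreads the last image's corner errors dA..dD evenly across the N−1 concatenated transforms so the 360° loop closes. A minimal sketch of that even distribution, using a simplified per-image corner-offset representation rather than Cier's homography chain:

```python
import numpy as np

def distribute_loop_error(corner_shifts: np.ndarray,
                          corner_errors: np.ndarray) -> np.ndarray:
    """corner_shifts: (N, 4, 2) accumulated corner offsets, one per image.
    corner_errors: (4, 2) residuals dA..dD measured at loop closure.
    Each step absorbs 1/(N-1) of the residual, so image i is adjusted by
    i/(N-1) of it and the last image absorbs all of it, closing the loop."""
    n = corner_shifts.shape[0]
    adjusted = corner_shifts.astype(float).copy()
    for i in range(1, n):
        adjusted[i] -= corner_errors * (i / (n - 1))
    return adjusted
```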
Regarding claim 19, Bathala teaches wherein the odometry data represents physical movement of the robotic system along the route. (“Using the method 600 illustrated in FIG. 6 above, the robot 102 may, due to the extrinsic bias of the sensor 302, estimate its movement by route 704 and its final position at location 708, however the robot 102 may have actually navigated along the route 702 properly as illustrated, wherein the actual movement of the robot 102 along the route 702 may be determined based on data from a plurality of well calibrated odometry and sensor units (excluding the sensor 302).” Bathala, para. [0094])

Regarding claim 20, Cier teaches wherein the first and second images are aligned to form the panoramic image such that features of the object are neither duplicated nor omitted in the panoramic image. (“FIG. 2H continues the examples of FIGS. 2A-2G, and illustrates information 290h that shows a sequence of constituent images that have been selected and cropped for use in being combined as part of generating a panorama image from acquisition location 210B.” Cier, para. [0044]); as shown in fig. 2H, no feature is duplicated or omitted.

Claim(s) 4, 5, 10, 11, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Cier, Ueno, and Bathala as mentioned above and further in view of Garg et al. (US 20150324662 A1), referred to as Garg hereinafter.

Regarding claim 4, the combination of Cier, Ueno, and Bathala does not teach wherein the processor is further configured to execute the computer readable instructions to determine an image quality matrix based on the level of contrast detected within bounding boxes of labels within a plurality of images.

Garg teaches wherein the processor is further configured to execute the computer readable instructions to determine an image quality matrix based on the level of contrast detected within bounding boxes of labels within a plurality of images. (“edges in an image have an intrinsic scale most effectively analyzed at a fine scale of resolution, while non-edged regions, such as regions of uniform material, can be accurately analyzed at relatively coarse scales of resolution. Thus, an image is divided by edge and non-edge regions, segregating the edge regions at a fine scale of resolution, and the remaining non-edge regions at a relatively coarse scale of resolution.” Garg, para. [0053]) and (“In steps 1304 and 1306, the CPU 12 operates to calculate the average color for the pixels of the candidate 1-D token, and compares that color to pixels of a pre-selected neighborhood surrounding the candidate 1-D token, to determine the number of pixels in the neighborhood, Ns, that match the color of the candidate 1-D token.” Garg, para. [0077])

Cier, Ueno, Bathala, and Garg are combinable because they are from the same field of endeavor, image processing. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Cier, Ueno, and Bathala in light of Garg’s determining image quality. One would have been motivated to do so because it can increase accuracy in a solve based upon color change. (Garg, para. [0061])

Regarding claim 5, Garg teaches wherein the processor is further configured to execute the computer readable instructions to adjust color values of pixels depicting the label within the bounding box of either the first image or the second image based on the color values of the label in the first and second images and the image quality matrix.
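
Claim 4's "image quality matrix" scores contrast inside label bounding boxes across several images. Garg is cited for contrast and scale analysis generally; the RMS-contrast scoring below is an illustrative assumption of how such a matrix could be built, not Garg's method:

```python
import numpy as np

def quality_matrix(images: list, boxes: list) -> np.ndarray:
    """images: 2-D grayscale arrays; boxes: (x0, y0, x1, y1) label regions.
    Returns a (num_images, num_boxes) matrix of contrast scores, one score
    per label bounding box per image."""
    scores = np.zeros((len(images), len(boxes)))
    for i, img in enumerate(images):
        for j, (x0, y0, x1, y1) in enumerate(boxes):
            patch = img[y0:y1, x0:x1].astype(float)
            scores[i, j] = patch.std()  # RMS contrast of the label patch
    return scores
```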
(“a color correct gamma correction can be achieved by performing an intensity adjustment on the illumination image, and merging the intensity adjusted illumination image with the corresponding material image, for a color correct, intensity adjusted output image.” Garg, para. [0302])

Regarding claim 10, refer to the explanation of claim 4. Regarding claim 11, refer to the explanation of claim 5. Regarding claim 16, refer to the explanation of claim 4. Regarding claim 17, refer to the explanation of claim 5.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PARDIS SOHRABY, whose telephone number is (571) 270-0809. The examiner can normally be reached Monday through Friday, 9 am to 6 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PARDIS SOHRABY/
Examiner, Art Unit 2664

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664

Prosecution Timeline

Sep 18, 2023: Application Filed
Aug 23, 2025: Non-Final Rejection — §103
Nov 26, 2025: Response Filed
Jan 06, 2026: Final Rejection — §103
Apr 07, 2026: Request for Continued Examination
Apr 12, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592015
PREDICTING SCATTERED SIGNAL OF X-RAY, AND CORRECTING SCATTERED BEAM
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12573236
FACIAL EXPRESSION-BASED DETECTION METHOD FOR DEEPFAKE BY GENERATIVE ARTIFICIAL INTELLIGENCE (AI)
Granted Mar 10, 2026 • 2y 5m to grant
Patent 12567240
OPEN VOCABULARY INSTANCE SEGMENTATION WITH NOISE ESTIMATION AND ROBUST STUDENT
Granted Mar 03, 2026 • 2y 5m to grant
Patent 12555378
IMAGE ANALYSIS SYSTEM, IMAGE ANALYSIS METHOD, AND PROGRAM
Granted Feb 17, 2026 • 2y 5m to grant
Patent 12536666
Computer Software Module Arrangement, a Circuitry Arrangement, an Arrangement and a Method for Improved Image Processing
Granted Jan 27, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 89% (+9.7%)
Median Time to Grant: 2y 12m
PTA Risk: Moderate
Based on 92 resolved cases by this examiner. Grant probability derived from career allow rate.
