Prosecution Insights
Last updated: April 19, 2026
Application No. 18/259,639

INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD

Non-Final OA: §102, §103
Filed: Jun 28, 2023
Examiner: DHOOGE, DEVIN J
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: Omron Corporation
OA Round: 2 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 3y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (above average; 50 granted / 71 resolved; +8.4% vs TC avg)
Interview Lift: +42.9% on resolved cases with interview
Avg Prosecution: 3y 5m
Currently Pending: 48
Total Applications: 119 (across all art units)

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 49.4% (+9.4% vs TC avg)
§102: 35.8% (-4.2% vs TC avg)
§112: 5.7% (-34.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 71 resolved cases.

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This communication is filed in response to the action filed on 12/23/2025. Claims 1-19 are pending.

Response to Arguments

Applicant's arguments filed on 12/23/2025 on pages 2-5, under REMARKS, with respect to the 35 U.S.C. 102 and 103 rejections of claims 1-19 have been fully considered and are persuasive. The rejections of the claims have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of US 2021/0271923 A1.

Information Disclosure Statement

The information disclosure statement (IDS) filed on 12/23/2025 has been considered.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 5, and 18-19 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by US 2021/0271923 A1 to YU et al. (hereinafter "YU").

As per claim 1, YU discloses an information processing apparatus (a computing system and method of operation relating to processing image information/data; title; abstract; figs 1-2, 6-7; paragraphs [0015-0017], [0043-0049]), comprising: a detector configured to detect a movable object in a frame image of a video (the system comprises a similarity detector component which detects similarity/matching of objects which are movable in subsequently captured continuous image frames of a video; abstract; figs 1, 6-7; paragraphs [0043-0044], [0069-0073], [0078]); a calculator configured to calculate a confidence of the detected movable object being a predetermined object (the computing system includes a first still image object detector to receive a first frame of the plurality of video frames and calculate localization information and confidence information for each potential object patch in the first frame for each respective object, and applies the patches in subsequent frames based on a confidence score to find paired patches to their matching object; abstract; figs 1, 6-7; paragraphs [0043-0044], [0065], [0069-0073]); and a detection range determiner configured to determine a detection range for a first movable object detected in a first frame based on a confidence of the first movable object calculated with a range circumscribing the first movable object (the detection modules of the computing system include a pair of still image object detectors 106A and 106B that detect objects in frames 104A and 104B, respectively, received from video input 102; the images are analyzed for objects, and objects are paired with their respective bounding boxes representing the image patch in the second frame 104B; bounding boxes are associated using D = ||d_{f-1} - d_f||, the Euclidean distance between default bounding boxes of two adjacent frames (acting as the range which includes the first object and bounding box), where Y is the label indicating whether two boxes are paired or not, the threshold is the threshold at which two boxes are unpaired (acting as a limit to the range of detection), and theta is the proportion of paired boxes to unpaired boxes; the still image object detector 206A further calculates localization information 210 including one or more confidence scores 212 for each potential object and related object patch in the first frame; figs 1-2, 4; paragraphs [0015], [0019], [0023-0024], [0039-0049]) and on a confidence of the first movable object in the first frame calculated with a detection range for a second movable object detected in a second frame preceding the first frame (both image object detectors 206/106 A & B are configured to detect objects in a frame and to calculate a confidence score; these scores are then used, based on bounding box information, to position the bounding box on a center pixel and to pair the respective object bounding box with the object in the preceding frame in a continuous stream of image frames provided by an input video, which supplies frames continuously at the frame rate and would provide a second frame preceding a first; abstract; figs 6-7; paragraphs [0023-0025], [0043-0044], [0065], [0069-0073], [0078]), and to record the determined detection range into a recorder (the computing system is adapted to store and save the bounding box information, wherein the bounding box provides a range in the image for the object; figs 1, 6-8; paragraphs [0024], [0062], [0072-0076], [0095]; claim 18).
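The Euclidean-distance pairing the Office Action attributes to YU can be illustrated with a minimal sketch: pair bounding boxes across two adjacent frames by center-to-center distance, leaving any box farther than a threshold unpaired. This is an illustrative assumption for how such an association could work, not YU's actual implementation (YU's formula is only partially legible in the file wrapper, and its Y and theta terms are not modeled here); the function name and greedy strategy are hypothetical.

```python
import math

def pair_boxes(prev_boxes, curr_boxes, threshold):
    """Greedily pair bounding boxes across two adjacent frames by the
    Euclidean distance between box centers.  Boxes are (x, y, w, h)
    tuples; pairs farther apart than `threshold` are left unpaired.
    Returns a list of (prev_index, curr_index) pairs."""
    def center(box):
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    pairs = []
    unused = list(range(len(curr_boxes)))  # current-frame boxes not yet paired
    for i, prev in enumerate(prev_boxes):
        cx, cy = center(prev)
        best_j, best_d = None, threshold
        for j in unused:
            qx, qy = center(curr_boxes[j])
            d = math.hypot(qx - cx, qy - cy)  # center-to-center distance
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            pairs.append((i, best_j))
            unused.remove(best_j)
    return pairs
```

Boxes left in `unused` after the loop would correspond to newly appearing objects; previous-frame boxes with no match under the threshold would be treated as unpaired, mirroring the "threshold of two boxes which are unpaired" described above.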
As per claim 2, YU discloses the information processing apparatus according to claim 1, further comprising: a movable-object determiner configured to determine, selectively from a plurality of movable objects detected in the second frame, the second movable object being a same object as the first movable object (the apparatus includes a first still image object detector to receive a first frame of the plurality of video frames and calculate localization information and confidence information for each potential object patch in the first frame, the localization information including predicted bounding box coordinates and the confidence information including confidence scores for one or more object types/classifications; it further includes a second still image object detector to receive a second frame of the plurality of video frames adjacent to the first frame and calculate localization information and confidence information for each potential object patch in the adjacent frame, where an adjacent frame refers to a frame that has been captured consecutively with another frame; the system includes a similarity detector trained to detect paired patches between the first frame and the adjacent frame based on a comparison of the detected potential object patches (determining the second movable object being the same as the first); the system further includes an enhancer to modify a prediction result for a paired patch in the adjacent frame to a prediction result of a corresponding paired patch in the first frame having a higher confidence score than the prediction result of the paired patch in the adjacent frame; abstract; figs 1-2, 6-7; paragraphs [0012], [0023-0024], [0043-0044], [0065], [0069-0073], [0078]).
As per claim 5, YU discloses the information processing apparatus according to claim 2, wherein the movable-object determiner determines the second movable object being the same object as the first movable object through matching between the first movable object and each of the plurality of movable objects detected in the second frame using a machine learning-based matching algorithm (the method of claim 2 is performed using a CNN learning algorithm to control and adjust algorithms related to the computer-implemented posterior proposal method, which generates new patches for each sample first and then performs confidence score prediction based on the new patches and similarity detection based on the new paired patches, using a selective search algorithm to perform patch pair proposal (matching) for a two-stage object detector of the computing system; figs 1-2, 6-7; paragraphs [0012], [0023-0024], [0041], [0043-0044], [0065], [0069-0073], [0078]).

As per claim 18, YU discloses an information processing method implementable with a computer (a computing system and method of operation relating to processing image information/data; title; abstract; figs 1-2, 6-7; paragraphs [0015-0017], [0043-0049]), the method comprising: detecting a first movable object in a first frame included in a video (the system comprises a similarity detector component which detects similarity/matching of objects which are movable in subsequently captured continuous image frames of a video; abstract; figs 1, 6-7; paragraphs [0043-0044], [0069-0073], [0078]); calculating a confidence of the first movable object being a predetermined object by using a range circumscribing the first movable object and using a detection range for a second movable object detected in a second frame preceding the first frame (the computing system includes a first still image object detector to receive a first frame of the plurality of video frames and calculate localization information and confidence information for each potential object patch in the first frame for each respective object, and applies the patches in subsequent frames based on a confidence score to find paired patches to their matching object; abstract; figs 1, 6-7; paragraphs [0043-0044], [0065], [0069-0073]), the detection range being recorded in a recorder (the detection modules of the computing system include a pair of still image object detectors 106A and 106B that detect objects in frames 104A and 104B, respectively, received from video input 102; the images are analyzed for objects, and objects are paired with their respective bounding boxes representing the image patch in the second frame 104B; bounding boxes are associated using D = ||d_{f-1} - d_f||, the Euclidean distance between default bounding boxes of two adjacent frames (acting as the range which includes the first object and bounding box), where Y is the label indicating whether two boxes are paired or not, the threshold is the threshold at which two boxes are unpaired (acting as a limit to the range of detection), and theta is the proportion of paired boxes to unpaired boxes; the still image object detector 206A further calculates localization information 210 including one or more confidence scores 212 for each potential object and related object patch in the first frame; figs 1-2, 4; paragraphs [0015], [0019], [0023-0024], [0039-0049]); and determining, based on a confidence of the first movable object calculated with the range circumscribing the first movable object (the same detection modules and bounding box association described above; figs 1-2, 4; paragraphs [0015], [0019], [0023-0024], [0039-0049]) and on a confidence of the first movable object in the first frame calculated with the detection range for the second movable object (both image object detectors 206/106 A & B are configured to detect objects in a frame and to calculate a confidence score; these scores are then used, based on bounding box information, to position the bounding box on a center pixel and to pair the respective object bounding box with the object in the preceding frame in a continuous stream of image frames provided by an input video, which supplies frames continuously at the frame rate and would provide a second frame preceding a first; abstract; figs 6-7; paragraphs [0023-0025], [0043-0044], [0065], [0069-0073], [0078]), a detection range for the first movable object, and recording the determined detection range into the recorder (the computing system is adapted to store and save the bounding box information, wherein the bounding box provides a range in the image for the object; figs 1, 6-8; paragraphs [0024], [0062], [0072-0076], [0095]; claim 18).
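The determining step recited in claims 1 and 18 — choosing between the range circumscribing the object in the current frame and the detection range recorded for the matching object in the preceding frame, according to which yields the higher confidence — can be sketched as follows. The `score` callback is a hypothetical stand-in for the claimed confidence calculator; the function name and signature are illustrative assumptions, not language from the application or the references.

```python
def determine_detection_range(circumscribing_range, prev_detection_range, score):
    """Pick the detection range for the first movable object: compare the
    confidence computed with the range circumscribing the object in the
    current frame against the confidence computed with the detection range
    recorded for the matching object in the preceding frame, and keep the
    higher-scoring range.  `score(range_)` is a caller-supplied confidence
    function (hypothetical stand-in for the claimed calculator)."""
    c_circ = score(circumscribing_range)
    c_prev = score(prev_detection_range)
    # On a tie, keep the current frame's circumscribing range
    return prev_detection_range if c_prev > c_circ else circumscribing_range
```

The chosen range would then be written to the recorder, as the claims recite, so the next frame can reuse it for the same comparison.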
As per claim 19, YU discloses a non-transitory computer readable medium storing a program for causing a computer to perform operations included in the information processing method according to claim 18 (the computing system includes a memory component, such as but not limited to a RAM memory device, to store programs and instructions related to the method of operation described above; fig 6; paragraphs [0055-0057]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 3-4 are rejected under 35 U.S.C. §
103 as being obvious over US 2021/0271923 A1 to YU et al. (hereinafter "YU") in view of US 2018/0330509 A1 to WATANABE et al. (hereinafter "WATANABE").

As per claim 3, YU discloses the information processing apparatus according to claim 2. YU fails to disclose wherein the movable-object determiner determines the second movable object being the same object as the first movable object based on a distance between a center of the range circumscribing the first movable object and a center of a detection range for each of the plurality of movable objects detected in the second frame.

WATANABE discloses wherein the movable-object determiner determines the second movable object being the same object as the first movable object based on a distance between a center of the range circumscribing the first movable object and a center of a detection range for each of the plurality of movable objects detected in the second frame (the computing system includes a 3D position determiner 143 adapted to determine a three-dimensional position of an object using the distance to the object corresponding to the detected object region; the distance on an image between the center of the parallax image and the center of the object region on the parallax image is identified for each object receiving a bounding box/region; figs 2-3, 10; paragraphs [0059-0060], [0067], [0089], [0099], [0127-0129]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YU to determine the second movable object being the same object as the first movable object based on a distance between a center of the range circumscribing the first movable object and a center of a detection range for each of the plurality of movable objects detected in the second frame, as taught by the WATANABE reference. The suggestion/motivation for doing so would have been to provide the ability to distinguish and identify whether the identification object present in front of the reference vehicle is a pedestrian, a bicycle, a motorcycle, a compact car, a truck, or the like, since it is ideal to know the object in front of the vehicle in order to drive safely and, if needed, avoid the object in the safest possible manner, as suggested at paragraph [0127] of WATANABE. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine WATANABE with YU to obtain the invention as specified in claim 3.

As per claim 4, YU discloses the information processing apparatus according to claim 2. YU fails to disclose wherein the movable-object determiner determines the second movable object being the same object as the first movable object based on a ratio of an overlapping area between the range circumscribing the first movable object and the detection range for each of the plurality of movable objects detected in the second frame to an area covered by the range circumscribing the first movable object and the detection range.
WATANABE discloses wherein the movable-object determiner determines the second movable object being the same object as the first movable object based on a ratio of an overlapping area between the range circumscribing the first movable object and the detection range for each of the plurality of movable objects detected in the second frame to an area covered by the range circumscribing the first movable object and the detection range (at step S1301, tracking unit 144 determines whether the object detected in the current frame satisfies the tracking continuation condition based on the real U map or the like generated by the real U map generator 138; the tracking continuation condition includes the following: the actual distance between the position of the object predicted from the previous frame and the position of the object generated based on the current frame is within a predetermined range, e.g., 2 meters; and, when K represents a region including an object detected in the parallax image, L represents a region including the object in the parallax image that is predicted from the previous frame, and M represents a region where the region L and the region K overlap, the ratio of the area of M (the ratio of overlap of two regions acting as bounding boxes for object detection) to the areas of K and L is greater than the predetermined threshold value S, e.g., 0.5, provided for example by: the area of M/{(the area of K + the area of L)/2} > S; paragraphs [0122-0124], [0142], [0167]).
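The tracking-continuation condition quoted from WATANABE above, area of M / ((area of K + area of L)/2) > S with S e.g. 0.5, can be computed directly from two axis-aligned rectangles. A minimal sketch follows; the function name and the (x, y, w, h) box convention are illustrative assumptions, not WATANABE's code:

```python
def tracking_continues(box_k, box_l, s_threshold=0.5):
    """Tracking-continuation test as characterized from WATANABE:
    K is the region detected in the current frame, L the region
    predicted from the previous frame, M their overlap.  Tracking
    continues when area(M) / ((area(K) + area(L)) / 2) > S.
    Boxes are (x, y, w, h) tuples."""
    xk, yk, wk, hk = box_k
    xl, yl, wl, hl = box_l
    # Overlap rectangle M (zero area if the boxes are disjoint)
    ox = max(0.0, min(xk + wk, xl + wl) - max(xk, xl))
    oy = max(0.0, min(yk + hk, yl + hl) - max(yk, yl))
    area_m = ox * oy
    area_k = wk * hk
    area_l = wl * hl
    ratio = area_m / ((area_k + area_l) / 2.0)
    return ratio > s_threshold
```

Note that dividing by the mean of the two areas rather than by their union makes this a slightly more permissive measure than standard intersection-over-union, which matches the equation as quoted.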
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YU to determine the second movable object being the same object as the first movable object based on a ratio of an overlapping area between the range circumscribing the first movable object and the detection range for each of the plurality of movable objects detected in the second frame to an area covered by the range circumscribing the first movable object and the detection range, as taught by the WATANABE reference. The suggestion/motivation for doing so would have been to provide M as the ratio of overlap of regions K and L representing object bounding regions, used in an equation to determine tracking continuation: if the condition S is met, tracking is continued, and if the condition S is not met, tracking is stopped, giving the system a stop condition that saves computing resources when tracking is no longer sustainable, as suggested by paragraphs [0166-0170] of WATANABE. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine WATANABE with YU to obtain the invention as specified in claim 4.

Claims 6-15 are rejected under 35 U.S.C. § 103 as being obvious over US 2021/0271923 A1 to YU et al. (hereinafter "YU") in view of US 11,069,682 B1 to EBRAHIMI AFROUZI et al. (hereinafter "EBRAHIMI"). As per claim 6, YU discloses the information processing apparatus according to claim 2.
YU fails to disclose wherein the movable object determiner determines, selectively from movable objects detected in each of a plurality of frames preceding the first frame, a movable object being the same object as the first movable object in each of the plurality of frames, and in response to, of confidences of the first movable object calculated with detection ranges for movable objects determined to be the same object as the first movable object in the plurality of frames, a greatest confidence being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, the detection range determiner determines a detection range with the greatest confidence as the detection range for the first movable object. EBRAHIMI discloses wherein the movable object determiner determines, selectively from movable objects detected in each of a plurality of frames preceding the first frame, a movable object being the same object as the first movable object in each of the plurality of frames, and in response to, of confidences of the first movable object calculated with detection ranges for movable objects determined to be the same object as the first movable object in the plurality of frames, a greatest confidence being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, the detection range determiner determines a detection range with the greatest confidence as the detection range for the first movable object (the range is defined to include the object regions that include point clouds have a confidence score of the points belonging to the object, the object is identified using thresholding methods on a pixel/point of the point cloud and estimating its similarity as a confidence score and matching pixels of image frames based on said confidence wherein the greater the confidence/similarity score of two similar points in consecutive frames indicates a 
point including an object; column 23, lines 1-54; column 35, line 48 – column 36, line 3; column 110, line 60-column 111, line 50; column 126, lines 22-54; column 135, lines 10-26; column 165, lines 35-61). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to modify YU to have a greatest confidence being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, the detection range determiner determines a detection range with the greatest confidence as the detection range for the first movable object of EBRAHIMI reference. The Suggestion/motivation for doing so would have been to provide the ability of classifying objects in a field of view as being moveable objects upon detecting a difference of greater than a threshold size related to the confidence score of the pixels belonging to the movable objects as suggested by EBRAHIMI column 40, lines 1-18. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine EBRAHIMI with YU to obtain the invention as specified in claim 6. As per claim 7, YU discloses the information processing apparatus according to claim 1. YU fails to disclose wherein in response to the confidence of the first movable object calculated with the range circumscribing the first movable object being greater than a first threshold, the detection range determiner determines the range circumscribing the first movable object as the detection range for the first movable object. 
EBRAHIMI discloses wherein in response to the confidence of the first movable object calculated with the range circumscribing the first movable object being greater than a first threshold, the detection range determiner determines the range circumscribing the first movable object as the detection range for the first movable object (where detection ranges overlap a confidence score is provided to be determined for overlap determinations, such as, based on an amount of overlap and aggregate amount of disagreement between depth vectors in the area of overlap in the different fields of view, and the Bayesian techniques down-weight updates to priors based on decreases in the amount of confidence and includes an area of overlap is identified, as a bounding box of pixel positions and a threshold angle of a vertical plane at which overlap starts in each field of view, the system constructs a larger field of view by combining the two fields of view using the overlapping depth measurements as attachment point into a shared coordinate system of a shared origin, values of weights / thresholds are determined based on various factors, such as the degree of similarity between depth measurements recorded from separate fields of view, the quality of the measurements, the weight of neighboring depth measurements, or the number of neighboring depth measurements with high weight; column 35, line 51- column 36, line 44). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to modify YU to have wherein in response to the confidence of the first movable object calculated with the range circumscribing the first movable object being greater than a first threshold, the detection range determiner determines the range circumscribing the first movable object as the detection range for the first movable object of EBRAHIMI reference. 
The Suggestion/motivation for doing so would have been to provide weighted adjustable parameters and thresholds in order to be able to adjust the system parameters and thresholding limits based on the specific scenario being observed in the images in order to provide optimal parameter settings for the highest confidence/similarity scores in object detection as suggested at column 36, lines 4-45 of EBRAHIMI. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine EBRAHIMI with YU to obtain the invention as specified in claim 7. As per claim 8, YU discloses the information processing apparatus according to claim 1. YU fails to disclose wherein in response to the confidence of the first movable object calculated with the detection range for the second movable object being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, the detection range determiner determines the detection range for the second movable object as the detection range for the first movable object. 
EBRAHIMI discloses wherein in response to the confidence of the first movable object calculated with the detection range for the second movable object being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, the detection range determiner determines the detection range for the second movable object as the detection range for the first movable object (the system is adapted to perform the object detection methods on multiple objects in the same frame including both a first and second object wherein the detectors of the system detect a first and a second object present in the FOY of the sensor, each of which is positioned at a different distance, may produce a different phase shift that may be associated with their respective distances and bounding boxes/ranges; column 12, lines 10-48; column 23, lines 1-54; column 35, line 48 – column 36, line 3; column 110, line 60-column 111, line 50; column 126, lines 22-54; column 135, lines 10-26; column 165, lines 35-61). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to modify YU to have wherein in response to the confidence of the first movable object calculated with the detection range for the second movable object being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, the detection range determiner determines the detection range for the second movable object as the detection range for the first movable object of EBRAHIMI reference. The Suggestion/motivation for doing so would have been to provide the ability of classifying objects in a field of view as being moveable objects upon detecting a difference of greater than a threshold size related to the confidence score of the pixels belonging to the movable objects as suggested by EBRAHIMI column 40, lines 1-18. 
Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine EBRAHIMI with YU to obtain the invention as specified in claim 8. As per claim 9, YU discloses the information processing apparatus according to claim 1. YU fails to disclose wherein in response to the confidence calculated with the determined detection range for the first movable object being greater than a second threshold, the detection range determiner records the detection range for the first movable object into the recorder. EBRAHIMI discloses wherein in response to the confidence calculated with the determined detection range for the first movable object being greater than a second threshold, the detection range determiner records the detection range for the first movable object into the recorder (the system is adapted to apply a threshold to the pixels of the image and remove all pixels below the thresholding value, the image feature selected for thresholding may include pixel intensity values and will remove background pixels by subtracting them if the intensity value does not meet the threshold acting as the second threshold of a plurality of thresholds which may be selectively applied and adjusted to be applied to any image feature/parameter; figs 37A-37B; column 32, line 17-column 33, line 59). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to modify YU to have wherein in response to the confidence calculated with the determined detection range for the first movable object being greater than a second threshold, the detection range determiner records the detection range for the first movable object into the recorder of EBRAHIMI reference. 
The suggestion/motivation for doing so would have been to provide the ability to use thresholding on a plurality of image features to determine bounding box/object region overlap of two objects in the frame, as suggested by EBRAHIMI at column 32, lines 3-45. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine EBRAHIMI with YU to obtain the invention as specified in claim 9.

As per claim 10, YU discloses the information processing apparatus according to claim 1. YU fails to disclose wherein in response to the confidence of the first movable object calculated with the detection range for the second movable object being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, and a number of consecutive frames each having a difference greater than a third threshold between the range circumscribing the first movable object and the detection range for the second movable object being less than or equal to a predetermined number, the detection range determiner determines the detection range for the second movable object as the detection range for the first movable object and records the determined detection range for the first movable object into the recorder.
EBRAHIMI discloses wherein in response to the confidence of the first movable object calculated with the detection range for the second movable object being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, and a number of consecutive frames each having a difference greater than a third threshold between the range circumscribing the first movable object and the detection range for the second movable object being less than or equal to a predetermined number, the detection range determiner determines the detection range for the second movable object as the detection range for the first movable object and records the determined detection range for the first movable object into the recorder (a third threshold is discussed in relation to the object detecting regions acting as bounding boxes; the system is adapted to provide the third threshold as a depth vector threshold for depth vectors of the point cloud pixels used to determine confidence of belonging to the detected object of the image regions within a range of the bounding boxes that includes the first object and additional objects; column 33, line 9 – column 34, line 67; column 38, lines 9-53; column 40, lines 1-10).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YU to have wherein in response to the confidence of the first movable object calculated with the detection range for the second movable object being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, and a number of consecutive frames each having a difference greater than a third threshold between the range circumscribing the first movable object and the detection range for the second movable object being less than or equal to a predetermined number, the detection range determiner determines the detection range for the second movable object as the detection range for the first movable object and records the determined detection range for the first movable object into the recorder, as taught by EBRAHIMI.

The suggestion/motivation for doing so would have been to provide the ability to iterate through each of the core depth vectors and create a graph of reachable depth vectors, where nodes on the graph are identified in response to non-core corresponding depth vectors being within a threshold distance of a core depth vector in the graph, and in response to core depth vectors in the graph being reachable by other core depth vectors in the graph, where two depth vectors are reachable from one another if there is a path from one depth vector to the other in which every link in the path is a core depth vector within a threshold distance of its neighbor, and to identify clusters in order to determine the centroid of each cluster in the spatial dimensions of an output depth vector useful for constructing floor plan maps, as suggested by column 34, lines 15-65 of EBRAHIMI.
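The clustering procedure invoked in the motivation statement (core depth vectors, reachability through chains of points within a threshold distance, per-cluster centroids) is density-based clustering in the DBSCAN family. A toy sketch on 2-D points, illustrative only and not EBRAHIMI's implementation, is:

```python
import math

def cluster_depth_vectors(points, eps, min_pts):
    """Density-based clustering sketch: 'core' points have at least
    min_pts points (including themselves) within eps; clusters grow
    along chains of core points, mirroring the reachability graph
    described in the motivation passage."""
    def neighbors(i):
        return [j for j in range(len(points)) if j != i
                and math.dist(points[i], points[j]) <= eps]

    core = {i for i in range(len(points)) if len(neighbors(i)) + 1 >= min_pts}
    labels = [None] * len(points)   # None = unassigned / noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None or i not in core:
            continue
        stack = [i]
        labels[i] = cluster
        while stack:
            p = stack.pop()
            for q in neighbors(p):
                if labels[q] is None:
                    labels[q] = cluster
                    if q in core:
                        stack.append(q)  # only core points extend the chain
        cluster += 1
    return labels

def centroid(points):
    # Spatial centroid of one cluster's points.
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))
```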
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine EBRAHIMI with YU to obtain the invention as specified in claim 10.

As per claim 11, YU discloses the information processing apparatus according to claim 1. YU fails to disclose further comprising: an output unit configured to superimpose the detection range for the first movable object recorded in the recorder on the first frame and output the detection range superimposed on the first frame.

EBRAHIMI discloses further comprising: an output unit configured to superimpose the detection range for the first movable object recorded in the recorder on the first frame and output the detection range superimposed on the first frame (overlaying, with a processor of the robot, the images captured by the at least two image sensors to produce a superimposed image showing both captured images in a single image, which would include the object detection region/bounding box associated with the object; claim 1).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YU to have an output unit configured to superimpose the detection range, as taught by EBRAHIMI. The suggestion/motivation for doing so would have been to provide the ability to overlay/superimpose two images for image comparison purposes, as suggested by claim 1 of EBRAHIMI. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine EBRAHIMI with YU to obtain the invention as specified in claim 11.
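The superimposing step of claim 11 can be pictured as writing a bounding-box border into a frame buffer. The sketch below is a generic illustration of that operation (hypothetical names; neither reference's code):

```python
def superimpose_range(frame, x, y, w, h, marker=255):
    """Overlay a rectangular detection range on a grayscale frame
    (list of pixel rows) by writing marker values along the
    rectangle's border; the input frame is left unmodified."""
    out = [row[:] for row in frame]
    for cx in range(x, x + w):
        out[y][cx] = marker          # top edge
        out[y + h - 1][cx] = marker  # bottom edge
    for cy in range(y, y + h):
        out[cy][x] = marker          # left edge
        out[cy][x + w - 1] = marker  # right edge
    return out
```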
As per claim 12, YU in view of EBRAHIMI discloses the information processing apparatus according to claim 11. YU fails to disclose wherein in response to a confidence calculated with the detection range for the first movable object recorded in the recorder being greater than a second threshold, the output unit outputs the detection range for the first movable object.

EBRAHIMI discloses wherein in response to a confidence calculated with the detection range for the first movable object recorded in the recorder being greater than a second threshold, the output unit outputs the detection range for the first movable object (the system is adapted to apply a threshold to the pixels of the image and remove all pixels below the thresholding value; the image feature selected for thresholding may include pixel intensity values, and background pixels are removed by subtraction if their intensity values do not meet the threshold, acting as the second threshold of a plurality of thresholds which may be selectively applied and adjusted for any image feature/parameter; figs. 37A-37B; column 32, line 17 – column 33, line 59).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YU to have wherein in response to a confidence calculated with the detection range for the first movable object recorded in the recorder being greater than a second threshold, the output unit outputs the detection range for the first movable object, as taught by EBRAHIMI. The suggestion/motivation for doing so would have been to provide the ability to use thresholding on a plurality of image features to determine bounding box/object region overlap of two objects in the frame, as suggested by EBRAHIMI at column 32, lines 3-45.
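The cited mapping rests on two simple threshold operations: removing pixels whose intensity fails a threshold, and gating output on a confidence exceeding a second threshold. An illustrative sketch (hypothetical names, not from the references):

```python
def threshold_pixels(image, threshold):
    """Zero out (remove) pixels whose intensity does not meet the
    threshold, keeping only foreground-candidate pixels."""
    return [[px if px >= threshold else 0 for px in row] for row in image]

def maybe_output_range(confidence, second_threshold, detection_range):
    """Output the recorded detection range only when its confidence
    clears the second threshold; otherwise suppress it."""
    return detection_range if confidence > second_threshold else None
```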
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine EBRAHIMI with YU to obtain the invention as specified in claim 12.

As per claim 13, YU in view of EBRAHIMI discloses the information processing apparatus according to claim 11. YU fails to disclose wherein in response to the confidence of the first movable object calculated with the detection range for the second movable object being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, and a number of consecutive frames each having a difference greater than a third threshold between the range circumscribing the first movable object and the detection range for the second movable object being less than or equal to a predetermined number, the output unit outputs the detection range for the first movable object recorded in the recorder.
EBRAHIMI discloses wherein in response to the confidence of the first movable object calculated with the detection range for the second movable object being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, and a number of consecutive frames each having a difference greater than a third threshold between the range circumscribing the first movable object and the detection range for the second movable object being less than or equal to a predetermined number, the output unit outputs the detection range for the first movable object recorded in the recorder (a third threshold is discussed in relation to the object detecting regions acting as bounding boxes; the system is adapted to provide the third threshold as a depth vector threshold for depth vectors of the point cloud pixels used to determine confidence of belonging to the detected object of the image regions within a range of the bounding boxes that includes the first object and additional objects; column 33, line 9 – column 34, line 67; column 38, lines 9-53; column 40, lines 1-10).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YU to have wherein in response to the confidence of the first movable object calculated with the detection range for the second movable object being greater than the confidence of the first movable object calculated with the range circumscribing the first movable object, and a number of consecutive frames each having a difference greater than a third threshold between the range circumscribing the first movable object and the detection range for the second movable object being less than or equal to a predetermined number, the output unit outputs the detection range for the first movable object recorded in the recorder, as taught by EBRAHIMI.
The suggestion/motivation for doing so would have been to provide the ability to iterate through each of the core depth vectors and create a graph of reachable depth vectors, where nodes on the graph are identified in response to non-core corresponding depth vectors being within a threshold distance of a core depth vector in the graph, and in response to core depth vectors in the graph being reachable by other core depth vectors in the graph, where two depth vectors are reachable from one another if there is a path from one depth vector to the other in which every link in the path is a core depth vector within a threshold distance of its neighbor, and to identify clusters in order to determine the centroid of each cluster in the spatial dimensions of an output depth vector useful for constructing floor plan maps, as suggested by column 34, lines 15-65 of EBRAHIMI. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine EBRAHIMI with YU to obtain the invention as specified in claim 13.

As per claim 14, YU in view of EBRAHIMI discloses the information processing apparatus according to claim 11. YU fails to disclose wherein in response to a number of consecutive frames each having a confidence calculated with the determined detection range for the first movable object being greater than a first threshold being greater than a predetermined number, the output unit outputs the detection range for the first movable object.
EBRAHIMI discloses wherein in response to a number of consecutive frames each having a confidence calculated with the determined detection range for the first movable object being greater than a first threshold being greater than a predetermined number, the output unit outputs the detection range for the first movable object (where detection ranges overlap, a confidence score is determined for overlap determinations, such as based on an amount of overlap and an aggregate amount of disagreement between depth vectors in the area of overlap in the different fields of view; the Bayesian techniques down-weight updates to priors based on decreases in the amount of confidence; an area of overlap is identified as a bounding box of pixel positions and a threshold angle of a vertical plane at which overlap starts in each field of view; the system constructs a larger field of view by combining the two fields of view using the overlapping depth measurements as attachment points into a shared coordinate system with a shared origin; values of weights/thresholds are determined based on various factors, such as the degree of similarity between depth measurements recorded from separate fields of view, the quality of the measurements, the weight of neighboring depth measurements, or the number of neighboring depth measurements with high weight; column 35, line 51 – column 36, line 44).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YU to have wherein in response to a number of consecutive frames each having a confidence calculated with the determined detection range for the first movable object being greater than a first threshold being greater than a predetermined number, the output unit outputs the detection range for the first movable object, as taught by EBRAHIMI.
The suggestion/motivation for doing so would have been to provide weighted adjustable parameters and thresholds in order to be able to adjust the system parameters and thresholding limits based on the specific scenario being observed in the images, thereby providing optimal parameter settings for the highest confidence/similarity scores in object detection, as suggested at column 36, lines 4-45 of EBRAHIMI. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine EBRAHIMI with YU to obtain the invention as specified in claim 14.

As per claim 15, YU discloses the information processing apparatus according to claim 1. YU fails to disclose further comprising: a corrector configured to correct the detection range for the second movable object based on a change in position and size from the detection range for the second movable object to a detection range for a movable object determined to be a same object as the first movable object in a frame preceding the second frame.
EBRAHIMI discloses further comprising: a corrector configured to correct the detection range for the second movable object based on a change in position and size from the detection range for the second movable object to a detection range for a movable object determined to be a same object as the first movable object in a frame preceding the second frame (the system is adapted to perform the object detection methods on multiple objects in the same frame, including both a first and a second object, wherein the detectors of the system detect a first and a second object present in the FOV of the sensor, each of which is positioned at a different distance and may produce a different phase shift that may be associated with their respective distances and bounding boxes/ranges; column 12, lines 10-48; column 23, lines 1-54; column 35, line 48 – column 36, line 3; column 110, line 60 – column 111, line 50; column 126, lines 22-54; column 135, lines 10-26; column 165, lines 35-61).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YU to have a corrector configured to correct the detection range for the second movable object based on a change in position and size from the detection range for the second movable object to a detection range for a movable object determined to be a same object as the first movable object in a frame preceding the second frame, as taught by EBRAHIMI. The suggestion/motivation for doing so would have been to provide the ability to classify objects in a field of view as movable objects upon detecting a difference greater than a threshold size related to the confidence score of the pixels belonging to the movable objects, as suggested by EBRAHIMI at column 40, lines 1-18.
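The corrector of claim 15 adjusts a range based on the observed change in position and size relative to the preceding frame. One simple reading of that operation is linear extrapolation of the frame-to-frame deltas; the sketch below is that reading only (hypothetical names, not the claimed method as construed by either reference):

```python
def correct_range(current, previous):
    """Extrapolate a corrected (x, y, w, h) range by applying the
    change in position and size observed between the preceding
    frame's range and the current one."""
    dx = current[0] - previous[0]
    dy = current[1] - previous[1]
    dw = current[2] - previous[2]
    dh = current[3] - previous[3]
    return (current[0] + dx, current[1] + dy,
            current[2] + dw, current[3] + dh)
```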
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine EBRAHIMI with YU to obtain the invention as specified in claim 15.

Claim 16 is rejected under 35 U.S.C. § 103 as being obvious over US 2021/0271923 A1 to YU et al. (hereinafter “YU”) in view of US 2021/0103776 A1 to JIANG et al. (hereinafter “JIANG”).

As per claim 16, YU discloses the information processing apparatus according to claim 1. YU fails to disclose wherein the detector detects the movable object by at least one of interframe subtraction or background subtraction. JIANG discloses wherein the detector detects the movable object by at least one of interframe subtraction or background subtraction (a computing system includes an object detector trained on 3D models 203A-203M, which are processed in a salient attention learning 203 stage and a training pose generation 208 stage to train a 3D assisted object detection network 214, via a 3D assisted object detection network training 212 stage, to perform object detection using segmentation methods known in the art such as background subtraction; figs. 1-3; paragraphs [0062]-[0063]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YU to have wherein the detector detects the movable object by at least one of interframe subtraction or background subtraction, as taught by JIANG. The suggestion/motivation for doing so would have been to provide the ability to remove objects using background subtraction segmentation methods, which would combine well with the object patching technology of YU by providing a segmented image missing the segmented object, as suggested at paragraph [0063] of JIANG.
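Interframe subtraction, as recited in claim 16, marks pixels whose intensity changes between consecutive frames by more than a threshold. A minimal generic sketch of the technique (illustrative only, not JIANG's implementation):

```python
def frame_difference(frame_a, frame_b, threshold):
    """Interframe subtraction: produce a binary motion mask marking
    pixels whose absolute intensity change between two consecutive
    frames exceeds the threshold (1 = moving, 0 = static)."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
```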
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine JIANG with YU to obtain the invention as specified in claim 16.

Claim 17 is rejected under 35 U.S.C. § 103 as being obvious over US 2021/0271923 A1 to YU et al. (hereinafter “YU”) in view of US 2019/0286893 A1 to ADACHI (hereinafter “ADACHI”).

As per claim 17, YU discloses the information processing apparatus according to claim 1. YU fails to disclose wherein the calculator calculates the confidence of the detected movable object being the predetermined object by using a discriminator based on at least one of a neural network, boosting, or a support vector machine. ADACHI discloses wherein the calculator calculates the confidence of the detected movable object being the predetermined object by using a discriminator based on at least one of a neural network, boosting, or a support vector machine (AdaBoost, otherwise known as adaptive boosting, applies boosting to many weak discriminators, which improves discrimination accuracy; the discriminators are connected in series to form a cascade detector; fig. 8; paragraphs [0074]-[0078], [0105]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YU to have the calculator calculate the confidence of the detected movable object being the predetermined object by using a discriminator, as taught by ADACHI. The suggestion/motivation for doing so would have been to provide adaptive boosting to the discriminators in order to form a cascade detector, as suggested by paragraph [0076] of ADACHI.
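The cascade detector described in the ADACHI mapping chains weak discriminators in series, rejecting a sample early when any stage fails and otherwise combining weighted stage scores into a confidence. A generic sketch of that pattern (hypothetical structure, not ADACHI's implementation):

```python
def cascade_confidence(stages, sample):
    """stages: list of (weight, discriminator, pass_threshold).
    Each weak discriminator maps a sample to a score in [0, 1].
    Cascade behavior: a sample is rejected (confidence 0.0) as soon
    as any stage's score falls below that stage's pass threshold;
    otherwise the weighted scores combine into an overall confidence."""
    total_weight = sum(w for w, _, _ in stages)
    acc = 0.0
    for weight, discriminator, pass_threshold in stages:
        score = discriminator(sample)
        if score < pass_threshold:
            return 0.0  # early rejection: later stages never run
        acc += weight * score
    return acc / total_weight
```

The early-exit structure is what makes a cascade cheap: most negatives are discarded by the first, simplest stages.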
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ADACHI with YU to obtain the invention as specified in claim 17.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. This art includes the following: US 2021/0158562 A1.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE, whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Devin Dhooge/
USPTO Patent Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

Jun 28, 2023
Application Filed
Sep 25, 2025
Non-Final Rejection — §102, §103
Dec 23, 2025
Response Filed
Mar 17, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602773
Deep-Learning-based T1-Enhanced Selection of Linear Coefficients (DL-TESLA) for PET/MR Attenuation Correction
2y 5m to grant • Granted Apr 14, 2026
Patent 12579780
HYPERSPECTRAL TARGET DETECTION METHOD OF BINARY-CLASSIFICATION ENCODER NETWORK BASED ON MOMENTUM UPDATE
2y 5m to grant • Granted Mar 17, 2026
Patent 12524982
NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM, VISUALIZATION METHOD AND INFORMATION PROCESSING APPARATUS
2y 5m to grant • Granted Jan 13, 2026
Patent 12517146
IMAGE-BASED DECK VERIFICATION
2y 5m to grant • Granted Jan 06, 2026
Patent 12505673
MULTIMODAL GAME VIDEO SUMMARIZATION WITH METADATA
2y 5m to grant • Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+42.9%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 71 resolved cases by this examiner. Grant probability derived from career allow rate.
