Prosecution Insights
Last updated: April 19, 2026
Application No. 18/466,794

Solving an Error Related to an Object Captured in a Sensed Information Unit (SIU)

Final Rejection §103
Filed: Sep 13, 2023
Examiner: CAMERON, ATTICUS A
Art Unit: 3658
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Autobrains Technologies Ltd.
OA Round: 2 (Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 2y 10m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 84% (49 granted / 58 resolved), +32.5% vs TC average. This examiner grants above average.
Interview Lift: +11.4% higher allow rate for resolved cases with an interview (a moderate lift).
Typical Timeline: 2y 10m average prosecution.
Career History: 116 total applications across all art units; 58 currently pending.

Statute-Specific Performance (allow rate vs Tech Center average estimate):

§101: 13.6% (-26.4% vs TC avg)
§103: 48.0% (+8.0% vs TC avg)
§102: 30.8% (-9.2% vs TC avg)
§112: 5.9% (-34.1% vs TC avg)

Based on career data from 58 resolved cases; Tech Center averages are estimates.
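The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how they fit together, assuming straightforward arithmetic (the variable names and the implied Tech Center baseline are illustrative; the vendor's actual grant-probability model is not public):

```python
# Recompute the examiner's headline rates from the raw counts shown above.
# The TC-average baseline is back-derived from the stated +32.5% delta.

granted, resolved = 49, 58
tc_avg_allow = 0.52  # implied by 84.5% career rate minus the +32.5% delta

career_allow_rate = granted / resolved          # 49/58 = 0.8448 -> "84%"
delta_vs_tc = career_allow_rate - tc_avg_allow  # about +0.325

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Delta vs TC avg:   {delta_vs_tc:+.1%}")
```

Note that the dashboard's "84%" is simply 49/58 rounded down from 84.5%, and the +32.5% delta is consistent with a Tech Center baseline of roughly 52%.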

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Amendment

Claims 1, 9, and 10 have been amended. Claim 18 has been cancelled. The 35 U.S.C. 102(a)(1) rejection has been replaced with a 35 U.S.C. 103 rejection as a result of amendment.

Response to Arguments

Applicant's arguments filed 08/15/2025 have been fully considered but they are not persuasive.

Applicant first contends that the compression method described in Rai is not related to any error resolution. Examiner respectfully disagrees. As disclosed in at least Rai [0321]-[0324], which outline how the compression is used to improve accuracy:

[0321] Compression of the shape information of cluster signatures may be based on a priority of the cluster signature, a popularity of matches to the cluster signatures, and the like.

[0322] The shape information related to an input image that matches one or more of the cluster structures may be calculated based on shape information related to matching signatures.

[0323] For example—a shape information regarding a certain identifier within the signature of the input image may be determined based on shape information related to the certain identifiers within the matching signatures.
[0324] Any operation on the shape information related to the certain identifiers within the matching signatures may be applied in order to determine the (higher accuracy) shape information of a region of interest of the input image identified by the certain identifier.

Applicant then contends that Rai's signatures are related to signatures of objects, and not signatures of an entire type of objects, as claimed in claim 1. Examiner respectfully disagrees, and contends that Applicant is overstating the claimed language "the cluster is represented by the cluster signature", which remains disclosed under the broadest reasonable interpretation (BRI) of the sections of Rai presented in the rejection and presented again below.

Applicant finally contends that Rai does not disclose "when determined that the compressed version of the cluster signature does not resolve the accuracy, then adapting a sensing unit to solve the error." Examiner respectfully disagrees, and points to at least Rai [0827]-[0830], which describe an error sensing and resolution unit that determines and resolves an error in the signature:

[0827] Step 2712 of detecting that a certain signature of an object causes a false detection (see, for example FIGS. 39 and 40). The certain signature belongs to a certain concept structure that may include multiple signatures. The false detection may include determining that the object may be represented by the certain concept structure while the object may be of a certain type that may be not related to the certain concept structure. For example—a concept of a pedestrian may classify (by error) a mail box as a pedestrian.

[0828] Step 2714 of searching for an error inducing part (see, for example FIGS. 41-43) of the certain signature that induced the false detection.

[0829] Step 2715 of determining whether to remove the error inducing part of the certain signature. This may involve calculating a cost related to a removing the error inducing part from the concept structure—and removing the error inducing part when the cost may be within a predefined range.

[0830] Step 2726 of removing (see, for example FIG. 44) from the concept structure the error inducing part to provide an updated concept structure.

Examiner acknowledges Applicant's arguments with respect to the amended language overcoming the current rejection and finds them moot in view of the updated 35 U.S.C. 103 rejection presented below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Raichelgauz et al. (US20210053573, referred to as Raichelgauz) in view of Yang (CN116152612A, referred to as Yang).
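As an illustration only (not Rai's or the applicant's actual code), the error-resolution flow of Rai [0827]-[0830] discussed in the Response to Arguments can be sketched as follows. The set-of-identifiers signature representation, the cost metric, and the threshold are all assumptions made for the sketch:

```python
# Sketch of Rai steps 2712-2726: find the part of a signature that induces
# false detections and remove it when the removal cost is acceptable.
# Signatures are modeled as sets of identifier elements (an assumption).

def find_error_inducing_part(signature, false_matches, true_matches):
    """Per [0835]-[0836]: the error-inducing part is what the signature
    shares with every false-positive match but not with the true positives."""
    fp_common = set(signature)
    for m in false_matches:
        fp_common &= set(m)
    tp_common = set(signature)
    for m in true_matches:
        tp_common &= set(m)
    return fp_common - tp_common

def remove_error_inducing_part(signature, bad_part, max_cost=0.5):
    """Per [0829]-[0830]: remove the part only when the cost (here, the
    fraction of the signature lost -- a hypothetical metric) is in range."""
    cost = len(bad_part) / len(signature)
    if bad_part and cost <= max_cost:
        return set(signature) - bad_part
    return set(signature)
```

In the mail-box-as-pedestrian example from [0827], the elements shared by the pedestrian signature and the mail-box matches, but absent from true pedestrian matches, would be the candidates for removal.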
Regarding claim 1:

Raichelgauz discloses:

A method that is computer implemented and is for solving an error related to an object captured in a sensed information unit (SIU), the method comprises: applying an error resolving process [that is configured to support head classes of a long tail statistical distribution] by obtaining a cluster signature that is identified as introducing an error in relation to an object associated with a cluster, the cluster is represented by the cluster signature, the cluster signature is for at least partially automatically driving a vehicle;

([0231] Any of the mentioned above signature generation method provides a signature that does not explicitly includes accurate shape information. This adds to the robustness of the signature to shape related inaccuracies or to other shape related parameters.

[0232] The signature includes identifiers for identifying media regions of interest.

[0233] Each media region of interest may represent an object (for example a vehicle, a pedestrian, a road element, a human made structure, wearables, shoes, a natural element such as a tree, the sky, the sun, and the like) or a part of an object (for example—in the case of the pedestrian—a neck, a head, an arm, a leg, a thigh, a hip, a foot, an upper arm, a forearm, a wrist, and a hand). It should be noted that for object detection purposes a part of an object may be regarded as an object.

[0507] Obstacle 40, e.g., a pothole, may be located on roadway 20. In accordance with embodiments described herein, sensor 130 may also be operative to capture images of obstacle 40. The autonomous driving system may be operative to detect the presence of obstacle 40 in the images provided by sensor 130 and to determine an appropriate response, e.g., whether or not vehicle 100 should change speed and/or direction to avoid or minimize the impact with obstacle 40. For example, if the autonomous driving system determines that obstacle 40 is a piece of paper on roadway 20, no further action may be necessary. However, if obstacle 40 is a pothole, the autonomous driving system may instruct vehicle 100 to slow down and/or swerve to avoid obstacle 40.)

obtaining a compressed version of the cluster signature;

([0316] Object detection may include comparing a signature of an input image to signatures of one or more cluster structures in order to find one or more cluster structures that include one or more matching signatures that match the signature of the input image.

[0317] The number of input images that are compared to the cluster structures may well exceed the number of signatures of the cluster structures. For example—thousands, tens of thousands, hundreds of thousands (and even more) of input signature may be compared to much less cluster structure signatures. The ratio between the number of input images to the aggregate number of signatures of all the cluster structures may exceed ten, one hundred, one thousand, and the like.

[0318] In order to save computational resources, the shape information of the input images may be compressed.

[0319] On the other hand—the shape information of signatures that belong to the cluster structures may be uncompressed—and of higher accuracy than those of the compressed shape information.

[0320] When the higher quality is not required—the shape information of the cluster signature may also be compressed.

[0321] Compression of the shape information of cluster signatures may be based on a priority of the cluster signature, a popularity of matches to the cluster signatures, and the like.

[0322] The shape information related to an input image that matches one or more of the cluster structures may be calculated based on shape information related to matching signatures.

[0323] For example—a shape information regarding a certain identifier within the signature of the input image may be determined based on shape information related to the certain identifiers within the matching signatures.

[0324] Any operation on the shape information related to the certain identifiers within the matching signatures may be applied in order to determine the (higher accuracy) shape information of a region of interest of the input image identified by the certain identifier.

[0325] For example—the shapes may be virtually overlaid on each other and the population per pixel may define the shape.

[0326] For example—only pixels that appear in at least a majority of the overlaid shaped should be regarded as belonging to the region of interest.

[0327] Other operations may include smoothing the overlaid shapes, selecting pixels that appear in all overlaid shapes.

[0328] The compressed shape information may be ignored of or be considered.)

determining whether the compressed version of the cluster signature resolves the error; when determined that the compressed version of the cluster signature resolves the error, automatically replacing the signature by the compressed version of the cluster signature; when determined that the compressed version of the cluster signature does not resolve the accuracy, then triggering a generation of another error resolving process that comprises adding another error resolving unit that has signature generating capabilities, and is configured to solve the error, [the error is associated with a long tail class of the long tail statistical distribution;] and generating, once the other error resolving unit is added and configured to solve the error, a corresponding error resolving rule that maps to the other error resolving unit the sensed information unit having a signature that is associated with the error and is generated by the error resolving unit.

([0827] Step 2712 of detecting that a certain signature of an object causes a false detection (see, for example FIGS. 39 and 40). The certain signature belongs to a certain concept structure that may include multiple signatures. The false detection may include determining that the object may be represented by the certain concept structure while the object may be of a certain type that may be not related to the certain concept structure. For example—a concept of a pedestrian may classify (by error) a mail box as a pedestrian.

[0828] Step 2714 of searching for an error inducing part (see, for example FIGS. 41-43) of the certain signature that induced the false detection.

[0829] Step 2715 of determining whether to remove the error inducing part of the certain signature. This may involve calculating a cost related to a removing the error inducing part from the concept structure—and removing the error inducing part when the cost may be within a predefined range.

[0830] Step 2726 of removing (see, for example FIG. 44) from the concept structure the error inducing part to provide an updated concept structure.

[0831] Each signature may represent a map of firing neurons of a neural network.

[0835] Comparing the matching first signatures and matching second signatures to find parts that causes false errors and parts that result in positive detection.

[0836] Defining the error inducing part of the certain signature based on an overlap between the matching parts of the certain signature.

[0837] The updated concept may be shared between vehicles.)
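The shape-combination operation quoted in the claim 1 mapping (Rai [0325]-[0327]) is a per-pixel majority vote over the overlaid shapes. A minimal sketch, assuming each shape is represented as a set of pixel coordinates (the representation and function names are illustrative, not Rai's disclosure):

```python
from collections import Counter

def majority_overlay(shapes):
    """Overlay shapes (each a set of (row, col) pixels) and keep pixels
    that appear in at least a majority of them, per Rai [0325]-[0326]."""
    counts = Counter(p for shape in shapes for p in shape)
    need = len(shapes) // 2 + 1  # strict majority of the overlaid shapes
    return {p for p, c in counts.items() if c >= need}

def unanimous_overlay(shapes):
    """Stricter variant from [0327]: keep only pixels present in all shapes."""
    return set.intersection(*map(set, shapes))
```

Either operation yields the "higher accuracy" region-of-interest shape of [0324] from the shapes of the matching cluster signatures.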
Raichelgauz does not explicitly teach the bracketed limitations [that is configured to support head classes of a long tail statistical distribution] and [the error is associated with a long tail class of the long tail statistical distribution]. However, Yang, from an analogous field of endeavor, teaches:

that is configured to support head classes of a long tail statistical distribution; the error is associated with a long tail class of the long tail statistical distribution;

([pg. 5-6, lines 35-9] the target category may be the category of the target item in the image to be recognized. For example, the image to be recognized is an image carrying a kitten, and the target category is the category corresponding to the kitten in the image to be recognized, that is, a cat. The long-tail image recognition model is a trained deep learning model, wherein the long-tail image recognition model is trained based on the long-tail data set. That is to say, the preset training image set corresponding to the long-tail image recognition model includes several target images. The ratio of the number of images in the preset image set to the total number of images in the preset image set is smaller than a preset ratio threshold. In other words, the head class in the preset training image set corresponds to most of the target images in the preset training image set, while the tail class corresponds to a small part of the target images in the preset training image set.

In an implementation manner, the training process of the long-tail image recognition model specifically includes: H10, determine some feature vector groups of the training image pair by some expert network models; H20, determine the comparative learning loss item of the training image pair based on the respective feature vector groups corresponding to each expert network model, and determine the classification loss term for the training image pairs based on the respective basic feature vectors corresponding to each expert network model and the label category of the training image pair; H30, determine the distillation loss item based on the corresponding basic feature vectors of each expert network model; H40, train the expert network model based on the comparative learning loss item, the classification loss item, and the distillation loss item to obtain a trained expert network model; H50, determine the long-tail image recognition model based on the trained expert network model.)

Raichelgauz and Yang are analogous art to the claimed invention since they are from the similar field of image processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the compression-error and signature-error resolving processing of Raichelgauz to include the specific statistical distribution used in the error sensing process taught in Yang. The motivation for the modification would have been to provide the image error resolution process of Raichelgauz with the specific statistical distribution taught in Yang, known in the art as a useful basis for error resolution in vehicle image processing.

Regarding claim 2:

The combination of Raichelgauz and Yang teaches: The method according to claim 1

Raichelgauz further discloses: wherein the cluster signature is calculated based on object signatures of the cluster, wherein the object signatures are generated by a signature generator that was fed with readout information provided by a readout circuit, the readout information was extracted from a deep neural network (DNN).

([0166] In FIG. 1E one or more initial iterations are executed by first and second CNN layers 6010(1) and 6010(2) that apply first and second functions 6015(1) and 6015(2).

[0167] The output of these layers provided information about image properties. The image properties may not amount to object detection. Image properties may include location of edges, properties of curves, and the like.

[0168] The CNN may include additional layers (for example third till N'th layer 6010(N)) that may provide a CNN output 6018 that may include object detection information. It should be noted that the additional layers may not be included.

[0169] It should be noted that executing the entire signature generation process by a hardware CNN of fixed connectivity may have a higher power consumption—as the CNN will not be able to reduce the power consumption of irrelevant nodes.

[0170] FIG. 1F illustrates an input image 6001, and a single iteration of an expansion operation and a merge operation.

[0171] In FIG. 1F the input image 6001 undergoes two expansion operations.

[0172] The first expansion operation involves filtering the input image by a first filtering operation 6031 to provide first regions of interest (denoted 1) in a first filtered image 6031′.

[0173] The first expansion operation also involves filtering the input image by a second filtering operation 6032 to provide first regions of interest (denoted 2) in a second filtered image 6032′,

[0174] The merge operation includes merging the two images by overlaying the first filtered image on the second filtered image to provide regions of interest 1, 2, 12 and 21. Region of interest 12 is an overlap area shared by a certain region of interest 1 and a certain region of interest 2. Region of interest 21 is a union of another region of interest 1 and another region of interest 2.)
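The head/tail distinction from Yang relied on in the claim 1 combination reduces to a frequency threshold over the training labels: a class is a tail class when its share of the training set falls below a preset ratio threshold. A hedged sketch, with the threshold value and the label data invented for illustration:

```python
from collections import Counter

def split_head_tail(labels, ratio_threshold=0.05):
    """Split classes into head and tail per Yang's description: a class is
    a tail class when its image count divided by the total image count is
    smaller than the preset ratio threshold (0.05 here is an assumption)."""
    counts = Counter(labels)
    total = len(labels)
    head = {c for c, n in counts.items() if n / total >= ratio_threshold}
    return head, set(counts) - head
```

Under the claimed mapping, errors on classes that land in the tail set are the ones routed to the additional error resolving unit.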
Regarding claim 3:

The combination of Raichelgauz and Yang teaches: The method according to claim 1,

Raichelgauz further discloses: comprising identifying the cluster signature as introducing the error.

([0827] Step 2712 of detecting that a certain signature of an object causes a false detection (see, for example FIGS. 39 and 40). The certain signature belongs to a certain concept structure that may include multiple signatures. The false detection may include determining that the object may be represented by the certain concept structure while the object may be of a certain type that may be not related to the certain concept structure. For example—a concept of a pedestrian may classify (by error) a mail box as a pedestrian.

[0828] Step 2714 of searching for an error inducing part (see, for example FIGS. 41-43) of the certain signature that induced the false detection.

[0829] Step 2715 of determining whether to remove the error inducing part of the certain signature. This may involve calculating a cost related to a removing the error inducing part from the concept structure—and removing the error inducing part when the cost may be within a predefined range.

[0830] Step 2726 of removing (see, for example FIG. 44) from the concept structure the error inducing part to provide an updated concept structure.

[0831] Each signature may represent a map of firing neurons of a neural network.)

Regarding claim 4:

The combination of Raichelgauz and Yang teaches: The method according to claim 1,

Raichelgauz further discloses: wherein the cluster signature comprises cluster signature elements, wherein the compressing comprises reducing a number of cluster signature elements.

([0316] Object detection may include comparing a signature of an input image to signatures of one or more cluster structures in order to find one or more cluster structures that include one or more matching signatures that match the signature of the input image.
[0317] The number of input images that are compared to the cluster structures may well exceed the number of signatures of the cluster structures. For example—thousands, tens of thousands, hundreds of thousands (and even more) of input signature may be compared to much less cluster structure signatures. The ratio between the number of input images to the aggregate number of signatures of all the cluster structures may exceed ten, one hundred, one thousand, and the like. [0318] In order to save computational resources, the shape information of the input images may be compressed. [0319] On the other hand—the shape information of signatures that belong to the cluster structures may be uncompressed—and of higher accuracy than those of the compressed shape information. [0320] When the higher quality is not required—the shape information of the cluster signature may also be compressed. [0321] Compression of the shape information of cluster signatures may be based on a priority of the cluster signature, a popularity of matches to the cluster signatures, and the like. [0322] The shape information related to an input image that matches one or more of the cluster structures may be calculated based on shape information related to matching signatures. [0323] For example—a shape information regarding a certain identifier within the signature of the input image may be determined based on shape information related to the certain identifiers within the matching signatures. [0324] Any operation on the shape information related to the certain identifiers within the matching signatures may be applied in order to determine the (higher accuracy) shape information of a region of interest of the input image identified by the certain identifier. [0325] For example—the shapes may be virtually overlaid on each other and the population per pixel may define the shape. 
[0326] For example—only pixels that appear in at least a majority of the overlaid shaped should be regarded as belonging to the region of interest.

[0327] Other operations may include smoothing the overlaid shapes, selecting pixels that appear in all overlaid shapes.

[0328] The compressed shape information may be ignored of or be considered.)

Regarding claim 5:

The combination of Raichelgauz and Yang teaches: The method according to claim 4,

Raichelgauz further discloses: wherein the cluster signature elements are indexes for retrieving values that are intermediate results of a signature generation process.

([0316] Object detection may include comparing a signature of an input image to signatures of one or more cluster structures in order to find one or more cluster structures that include one or more matching signatures that match the signature of the input image.

[0317] The number of input images that are compared to the cluster structures may well exceed the number of signatures of the cluster structures. For example—thousands, tens of thousands, hundreds of thousands (and even more) of input signature may be compared to much less cluster structure signatures. The ratio between the number of input images to the aggregate number of signatures of all the cluster structures may exceed ten, one hundred, one thousand, and the like.

[0318] In order to save computational resources, the shape information of the input images may be compressed.

[0319] On the other hand—the shape information of signatures that belong to the cluster structures may be uncompressed—and of higher accuracy than those of the compressed shape information.

[0320] When the higher quality is not required—the shape information of the cluster signature may also be compressed.

[0321] Compression of the shape information of cluster signatures may be based on a priority of the cluster signature, a popularity of matches to the cluster signatures, and the like.

[0322] The shape information related to an input image that matches one or more of the cluster structures may be calculated based on shape information related to matching signatures.

[0323] For example—a shape information regarding a certain identifier within the signature of the input image may be determined based on shape information related to the certain identifiers within the matching signatures.

[0324] Any operation on the shape information related to the certain identifiers within the matching signatures may be applied in order to determine the (higher accuracy) shape information of a region of interest of the input image identified by the certain identifier.

[0325] For example—the shapes may be virtually overlaid on each other and the population per pixel may define the shape.

[0326] For example—only pixels that appear in at least a majority of the overlaid shaped should be regarded as belonging to the region of interest.

[0327] Other operations may include smoothing the overlaid shapes, selecting pixels that appear in all overlaid shapes.

[0328] The compressed shape information may be ignored of or be considered.)

Regarding claim 6:

The combination of Raichelgauz and Yang teaches: The method according to claim 1

Raichelgauz further discloses: wherein the cluster signature comprises cluster signature elements, wherein the compressing comprises reducing a number of non-zero cluster signature elements

([0316] Object detection may include comparing a signature of an input image to signatures of one or more cluster structures in order to find one or more cluster structures that include one or more matching signatures that match the signature of the input image.

[0317] The number of input images that are compared to the cluster structures may well exceed the number of signatures of the cluster structures. For example—thousands, tens of thousands, hundreds of thousands (and even more) of input signature may be compared to much less cluster structure signatures.
The ratio between the number of input images to the aggregate number of signatures of all the cluster structures may exceed ten, one hundred, one thousand, and the like. [0318] In order to save computational resources, the shape information of the input images may be compressed. [0319] On the other hand—the shape information of signatures that belong to the cluster structures may be uncompressed—and of higher accuracy than those of the compressed shape information. [0320] When the higher quality is not required—the shape information of the cluster signature may also be compressed. [0321] Compression of the shape information of cluster signatures may be based on a priority of the cluster signature, a popularity of matches to the cluster signatures, and the like. [0322] The shape information related to an input image that matches one or more of the cluster structures may be calculated based on shape information related to matching signatures. [0323] For example—a shape information regarding a certain identifier within the signature of the input image may be determined based on shape information related to the certain identifiers within the matching signatures. [0324] Any operation on the shape information related to the certain identifiers within the matching signatures may be applied in order to determine the (higher accuracy) shape information of a region of interest of the input image identified by the certain identifier. [0325] For example—the shapes may be virtually overlaid on each other and the population per pixel may define the shape. [0326] For example—only pixels that appear in at least a majority of the overlaid shaped should be regarded as belonging to the region of interest. [0327] Other operations may include smoothing the overlaid shapes, selecting pixels that appear in all overlaid shapes. [0328] The compressed shape information may be ignored of or be considered.) 
Regarding claim 7: The combination of Raichelgauz and Yang teaches: The method according to claim 1, Raichelgauz further discloses: comprising applying the error resolving process that differs from the compressing of the cluster signature. ([0316] Object detection may include comparing a signature of an input image to signatures of one or more cluster structures in order to find one or more cluster structures that include one or more matching signatures that match the signature of the input image. [0317] The number of input images that are compared to the cluster structures may well exceed the number of signatures of the cluster structures. For example—thousands, tens of thousands, hundreds of thousands (and even more) of input signature may be compared to much less cluster structure signatures. The ratio between the number of input images to the aggregate number of signatures of all the cluster structures may exceed ten, one hundred, one thousand, and the like. [0318] In order to save computational resources, the shape information of the input images may be compressed. [0319] On the other hand—the shape information of signatures that belong to the cluster structures may be uncompressed—and of higher accuracy than those of the compressed shape information. [0320] When the higher quality is not required—the shape information of the cluster signature may also be compressed. [0321] Compression of the shape information of cluster signatures may be based on a priority of the cluster signature, a popularity of matches to the cluster signatures, and the like. [0322] The shape information related to an input image that matches one or more of the cluster structures may be calculated based on shape information related to matching signatures. [0323] For example—a shape information regarding a certain identifier within the signature of the input image may be determined based on shape information related to the certain identifiers within the matching signatures. 
[0324] Any operation on the shape information related to the certain identifiers within the matching signatures may be applied in order to determine the (higher accuracy) shape information of a region of interest of the input image identified by the certain identifier. [0325] For example—the shapes may be virtually overlaid on each other and the population per pixel may define the shape. [0326] For example—only pixels that appear in at least a majority of the overlaid shaped should be regarded as belonging to the region of interest. [0327] Other operations may include smoothing the overlaid shapes, selecting pixels that appear in all overlaid shapes. [0328] The compressed shape information may be ignored of or be considered. [0827] Step 2712 of detecting that a certain signature of an object causes a false detection (see, for example FIGS. 39 and 40). The certain signature belongs to a certain concept structure that may include multiple signatures. The false detection may include determining that the object may be represented by the certain concept structure while the object may be of a certain type that may be not related to the certain concept structure. For example—a concept of a pedestrian may classify (by error) a mail box as a pedestrian. [0828] Step 2714 of searching for an error inducing part (see, for example FIGS. 41-43) of the certain signature that induced the false detection. [0829] Step 2715 of determining whether to remove the error inducing part of the certain signature. This may involve calculating a cost related to a removing the error inducing part from the concept structure—and removing the error inducing part when the cost may be within a predefined range. [0830] Step 2726 of removing (see, for example FIG. 44) from the concept structure the error inducing part to provide an updated concept structure. [0831] Each signature may represent a map of firing neurons of a neural network.) 
Regarding claim 8: The combination of Raichelgauz and Yang teaches: The method according to claim 1, Raichelgauz further discloses: wherein the cluster signature was generated by an object detection process, wherein the triggering of the error resolving process comprises triggering a generation of another object detection process for managing a detection of the object. ([0316] Object detection may include comparing a signature of an input image to signatures of one or more cluster structures in order to find one or more cluster structures that include one or more matching signatures that match the signature of the input image. [0317] The number of input images that are compared to the cluster structures may well exceed the number of signatures of the cluster structures. For example—thousands, tens of thousands, hundreds of thousands (and even more) of input signature may be compared to much less cluster structure signatures. The ratio between the number of input images to the aggregate number of signatures of all the cluster structures may exceed ten, one hundred, one thousand, and the like. [0318] In order to save computational resources, the shape information of the input images may be compressed. [0319] On the other hand—the shape information of signatures that belong to the cluster structures may be uncompressed—and of higher accuracy than those of the compressed shape information. [0320] When the higher quality is not required—the shape information of the cluster signature may also be compressed. [0321] Compression of the shape information of cluster signatures may be based on a priority of the cluster signature, a popularity of matches to the cluster signatures, and the like. [0322] The shape information related to an input image that matches one or more of the cluster structures may be calculated based on shape information related to matching signatures. 
[0323] For example—a shape information regarding a certain identifier within the signature of the input image may be determined based on shape information related to the certain identifiers within the matching signatures. [0324] Any operation on the shape information related to the certain identifiers within the matching signatures may be applied in order to determine the (higher accuracy) shape information of a region of interest of the input image identified by the certain identifier. [0325] For example—the shapes may be virtually overlaid on each other and the population per pixel may define the shape. [0326] For example—only pixels that appear in at least a majority of the overlaid shaped should be regarded as belonging to the region of interest. [0327] Other operations may include smoothing the overlaid shapes, selecting pixels that appear in all overlaid shapes. [0328] The compressed shape information may be ignored of or be considered. [0827] Step 2712 of detecting that a certain signature of an object causes a false detection (see, for example FIGS. 39 and 40). The certain signature belongs to a certain concept structure that may include multiple signatures. The false detection may include determining that the object may be represented by the certain concept structure while the object may be of a certain type that may be not related to the certain concept structure. For example—a concept of a pedestrian may classify (by error) a mail box as a pedestrian. [0828] Step 2714 of searching for an error inducing part (see, for example FIGS. 41-43) of the certain signature that induced the false detection. [0829] Step 2715 of determining whether to remove the error inducing part of the certain signature. This may involve calculating a cost related to a removing the error inducing part from the concept structure—and removing the error inducing part when the cost may be within a predefined range. [0830] Step 2726 of removing (see, for example FIG. 
44) from the concept structure the error inducing part to provide an updated concept structure. [0831] Each signature may represent a map of firing neurons of a neural network.)

Regarding claim 9: The combination of Raichelgauz and Yang teaches: The method according to claim 1, Raichelgauz further discloses: wherein the triggering of the error resolving process comprises triggering an addition of an error resolving portion of an adaptable artificial intelligence (AI) system. ([0874] step 3024 may include calculating one or more second signatures of the second sensed information; and searching, out of the concept structures related to the situation, for at least one matching concept structure that matches the one or more second signatures, and determining at least an identity of a detected object based on metadata of the at least one matching concept structure. [0875] Step 3024 may include, for example, object detection, object behavior estimation, and the like. Step 3024 may include, for example applying any type of machine learning algorithms, not necessarily signatures. e.g. Deep Learning where models are loaded, and input is a direct “pixel information” of the sensor. Step 3024 may be executed without signature calculation.)
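The false-detection workflow quoted in the rejections above (steps 2712-2726: detect that a signature causes a false detection, locate the error inducing part, and remove it from the concept structure only when the removal cost falls within a predefined range) can be sketched as follows. The set-of-identifiers encoding and the numeric cost are hypothetical; the reference leaves both unspecified:

```python
def resolve_false_detection(concept_structure, error_part, removal_cost,
                            cost_range=(0.0, 0.5)):
    """Remove an error inducing part from a concept structure when the
    cost of removal is within a predefined range (cf. steps 2715-2726).

    concept_structure: set of signature-part identifiers (hypothetical)
    error_part: the part found to induce the false detection (step 2714)
    removal_cost: precomputed cost of removing error_part (step 2715)
    """
    lo, hi = cost_range
    if lo <= removal_cost <= hi and error_part in concept_structure:
        updated = set(concept_structure)
        updated.discard(error_part)
        return updated  # updated concept structure (step 2726)
    return set(concept_structure)  # removal too costly: keep as-is

# A "pedestrian" concept that falsely matches a mailbox via part "sig_7"
pedestrian = {"sig_1", "sig_2", "sig_7"}
print(resolve_false_detection(pedestrian, "sig_7", removal_cost=0.2))
# cost within range, so the error inducing part is removed
```

The point of contention in the response to arguments is whether this removal workflow, rather than the compression of [0316]-[0328], is what maps to the claimed error resolving process.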
Regarding claim 10: Rejected using the same rationale as claim 1, but additionally directed to “A non-transitory computer readable medium storing instructions”, which Raichelgauz further discloses: A non-transitory computer readable medium storing instructions ([0009] There may be provided a non-transitory computer readable medium for detecting an improperly driven vehicle, the non-transitory computer readable medium may store instructions for detecting, based on information sensed by at least one sensor of a monitoring vehicle, a behavior of a monitored vehicle; determining whether the behavior of the monitored vehicle may be an improper behavior; generating, by a processing unit of the monitoring vehicle, an improper vehicle label that may include a unique vehicle identifier that identifies the monitored vehicle, when the behavior of the monitored vehicle may be determined to be improper; and sending the improper vehicle label to at least one shared database accessible to other vehicles.)

Regarding claim 11: Rejected using the same rationale as claim 2.
Regarding claim 12: Rejected using the same rationale as claim 3.
Regarding claim 13: Rejected using the same rationale as claim 4.
Regarding claim 14: Rejected using the same rationale as claim 5.
Regarding claim 15: Rejected using the same rationale as claim 6.
Regarding claim 16: Rejected using the same rationale as claim 7.
Regarding claim 17: Rejected using the same rationale as claim 8.
Regarding claim 18: Rejected using the same rationale as claim 9.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ATTICUS A CAMERON whose telephone number is 703-756-4535. The examiner can normally be reached M-F 8:30 am - 4:30 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Worden, can be reached on 571-272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ATTICUS A CAMERON/
Examiner, Art Unit 3658A

/JASON HOLLOWAY/
Primary Examiner, Art Unit 3658

Prosecution Timeline

Sep 13, 2023
Application Filed
May 15, 2025
Non-Final Rejection — §103
Aug 05, 2025
Interview Requested
Aug 13, 2025
Applicant Interview (Telephonic)
Aug 14, 2025
Examiner Interview Summary
Aug 15, 2025
Response Filed
Nov 13, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583445
VEHICLE CONTROLLER, METHOD, AND COMPUTER PROGRAM FOR VEHICLE CONTROL
2y 5m to grant · Granted Mar 24, 2026
Patent 12586473
SYSTEM AND METHOD TO BUILD A FLYABLE HOLDING PATTERN ENTRY TRAJECTORY WHEN THE AVAILABLE SPACE IS LIMITED
2y 5m to grant · Granted Mar 24, 2026
Patent 12544937
ROBOTIC HAND SYSTEM AND METHOD FOR CONTROLLING ROBOTIC HAND
2y 5m to grant · Granted Feb 10, 2026
Patent 12528448
HYBRID ELECTRIC VEHICLE ENERGY MANAGEMENT DURING EXTREME OPERATING CONDITIONS
2y 5m to grant · Granted Jan 20, 2026
Patent 12521883
SAFETY SYSTEM FOR INTEGRATED HUMAN/ROBOTIC ENVIRONMENTS
2y 5m to grant · Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
84%
Grant Probability
96%
With Interview (+11.4%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 58 resolved cases by this examiner. Grant probability derived from career allow rate.
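The interview-lift figure compares allow rates for resolved cases with and without an examiner interview. A minimal sketch of that arithmetic, using made-up counts rather than this examiner's actual per-case data (the report does not publish the underlying split):

```python
def interview_lift(granted_with, resolved_with, granted_without, resolved_without):
    """Percentage-point difference in allow rate for cases with vs. without
    an interview (illustrative definition; the report's exact methodology
    is not specified)."""
    rate_with = granted_with / resolved_with
    rate_without = granted_without / resolved_without
    return round((rate_with - rate_without) * 100, 1)

# Hypothetical counts: 24 of 25 allowed with interview, 25 of 33 without
print(interview_lift(24, 25, 25, 33))  # → 20.2 (percentage points)
```

Note that a lift computed this way is descriptive, not causal: applicants who request interviews may differ systematically from those who do not.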
