Prosecution Insights
Last updated: April 19, 2026
Application No. 18/507,185

IMAGE PROCESSING APPARATUS AND METHOD

Status: Non-Final OA (§103)
Filed: Nov 13, 2023
Examiner: FUJITA, KATRINA R
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Eficar Inc.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 70% — above average (472 granted / 674 resolved; +8.0% vs TC avg)
Interview Lift: +24.0% — strong (resolved cases with vs. without an interview)
Avg Prosecution: 3y 0m typical timeline (25 currently pending)
Total Applications: 699 career history, across all art units

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§112: 11.8% (-28.2% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 674 resolved cases.
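
The page does not state how the per-statute figures are computed. If they represent the share of this examiner's rejections that cite each statute (an assumption, not confirmed anywhere on the page), a minimal Python sketch of such a breakdown might look like the following; the sample data and TC averages are placeholders, not the tool's actual inputs:

```python
from collections import Counter

# Placeholder inputs: one entry per rejection across the examiner's resolved
# cases, plus estimated Tech Center averages for the comparison column.
rejections = ["103", "103", "101", "103", "112", "102"]   # illustrative sample
tc_avg = {"101": 0.40, "103": 0.40, "102": 0.40, "112": 0.40}  # placeholder estimates

counts = Counter(rejections)
total = sum(counts.values())
for statute in ("101", "103", "102", "112"):
    share = counts[statute] / total
    print(f"§{statute}: {share:.1%} ({share - tc_avg[statute]:+.1%} vs TC avg)")
```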

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 6, 9-11, 16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Malreddy et al. (US 2021/0342997) and Hida (US 2021/0374928).

Regarding claim 1, Malreddy et al. discloses an image processing apparatus, comprising: a memory storing computer-executable code (“FIG. 37 is a diagram showing hardware and software components of a computer system 400 on which the system of the present disclosure can be implemented. The computer system 400 can include a storage device 404, computer vision software code 406” at paragraph 0085, line 1); and at least one processor configured to access the memory and execute the code (“The CPU 412 could include any suitable single-core or multiple-core microprocessor of any suitable architecture that is capable of implementing and running the computer vision software code 406 (e.g., Intel processor)” at paragraph 0086, second to last sentence), wherein the code comprises instructions for the at least one processor to generate an input image, based on detecting the part from the image (“The vehicle component segmentation can be classified into six classes including a vehicle left front door, a vehicle right front door, a vehicle left front fender, a vehicle right front fender, a vehicle hood and a background” at paragraph 0073, line 7), apply the input image to an image analysis model to obtain a defect type and a defect size of the part detected from the image (“Damage severity classification can be classified for each vehicle component segmentation class according to one of undamaged, mildly damaged and extremely damaged by cropping each vehicle component along with its corresponding context from the obtained segmentation” at paragraph 0073, last sentence; “For example, the system 10 can determine whether the location of the damage includes at least one of a front of the vehicle (e.g., a hood or windshield) in step 80, a rear of the vehicle (e.g., a bumper and trunk) in step 82 and/or a side of the vehicle (e.g., a passenger door) in step 84. In step 86, the system 10 determines a severity classification of the damage sustained by the detected vehicle in the received image. For example, the system 10 can determine whether the sustained damage is minor in step 88, moderate in step 90 or severe in step 92” at paragraph 0057, line 9), and apply the defect type and the defect size to the input image to generate an output image (the output is a labeled image where the defect pixels are identified according to severity, which reflects size and type).

Malreddy et al. does not explicitly disclose generating an input image by preprocessing an image around a part. Hida teaches an image processing apparatus in the same field of endeavor of image-based defect detection, comprising: a memory storing computer-executable code (“The memory 994 may include a computer readable medium, which term may refer to a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) configured to carry computer-executable instructions” at paragraph 0113, line 1); and at least one processor configured to access the memory and execute the code (“The memory 994 stores data being read and written by the processor 993” at paragraph 0114, line 8), wherein the code comprises instructions for the at least one processor to generate an input image by preprocessing an image (“The acquiring step S102 may include preprocessing of the image to a size and/or format required by the generator neural network 340” at paragraph 0090, last sentence). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a preprocessing step as taught by Hida on the images of Malreddy et al. to ensure that the images are in the proper format for the U-Net classification.

Regarding claim 11, Malreddy et al. discloses an image processing method, comprising: generating an input image, based on detecting the part from the image (“The vehicle component segmentation can be classified into six classes including a vehicle left front door, a vehicle right front door, a vehicle left front fender, a vehicle right front fender, a vehicle hood and a background” at paragraph 0073, line 7), applying the input image to an image analysis model to obtain a defect type and a defect size of the part detected from the image (“Damage severity classification can be classified for each vehicle component segmentation class according to one of undamaged, mildly damaged and extremely damaged by cropping each vehicle component along with its corresponding context from the obtained segmentation” at paragraph 0073, last sentence; “For example, the system 10 can determine whether the location of the damage includes at least one of a front of the vehicle (e.g., a hood or windshield) in step 80, a rear of the vehicle (e.g., a bumper and trunk) in step 82 and/or a side of the vehicle (e.g., a passenger door) in step 84. In step 86, the system 10 determines a severity classification of the damage sustained by the detected vehicle in the received image. For example, the system 10 can determine whether the sustained damage is minor in step 88, moderate in step 90 or severe in step 92” at paragraph 0057, line 9), and applying the defect type and the defect size to the input image to generate an output image (the output is a labeled image where the defect pixels are identified according to severity, which reflects size and type).

Malreddy et al. does not explicitly disclose generating an input image by preprocessing an image around a part. Hida teaches an image processing method in the same field of endeavor of image-based defect detection, comprising: generating an input image by preprocessing an image (“The acquiring step S102 may include preprocessing of the image to a size and/or format required by the generator neural network 340” at paragraph 0090, last sentence). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a preprocessing step as taught by Hida on the images of Malreddy et al. to ensure that the images are in the proper format for the U-Net classification.
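
The preprocessing the combination relies on (Hida's resizing of an image “to a size and/or format required by the generator neural network”) is conventional. A minimal Python sketch, assuming Pillow and NumPy; the 256×256 target size, [0, 1] scaling, and CHW layout are illustrative assumptions, not taken from either reference:

```python
import numpy as np
from PIL import Image

def preprocess(path: str, size=(256, 256)) -> np.ndarray:
    """Bring an image into the shape/format a segmentation network expects."""
    img = Image.open(path).convert("RGB")            # enforce a known channel format
    img = img.resize(size, Image.BILINEAR)           # enforce the network's input size
    arr = np.asarray(img, dtype=np.float32) / 255.0  # scale pixel values to [0, 1]
    return arr.transpose(2, 0, 1)                    # HWC -> CHW, as most frameworks expect
```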
Regarding claims 6 and 16, Malreddy et al. discloses an apparatus and method wherein the defect type comprises: a first defect including a surface scratch of a painted surface area of the part (“A real dataset and a simulated dataset can each illustrate vehicle damage including, but not limited to, superficial damage such as a scratch or paint chip” at paragraph 0053, line 1); a second defect including shapeshifting of the part (“deformation damage such as a dent” at paragraph 0053, line 4); a third defect in which at least a portion of a shape of the part is different from an existing shape of the part (“The severe damage class can include damage indicative of a broken axle, a bent or twisted frame” at paragraph 0053, second to last sentence); and a fourth defect including a gap generated in a joint of the part (“The minor damage class can include damage indicative of a scratch, a scrape, a ding, a small dent, a crack in a headlight, etc. The moderate damage class can include damage indicative of a large dent, a deployed airbag” at paragraph 0053, third to last sentence).

Regarding claims 9, 10 and 19, Malreddy et al. discloses an apparatus and method further comprising: generating the image analysis model including a plurality of pooling layers and a plurality of unpooling layers, based on the image analysis model being a U-Net neural network; and training the image analysis model, using a loss function defined as a sum of losses between the input image and the output image, wherein the plurality of pooling layers are connected with non-linearity of a rectified linear unit (ReLU) function included in the image analysis model, and wherein the image analysis model includes a skip connection from the plurality of pooling layers to the plurality of unpooling layers (“Alternatively, segmentation processing can be performed with a U-Net-CNN. It is noted that a U-Net-CNN works well with small datasets. Advantageously, the segmentation processing provides for identifying a damaged vehicle component instead of a damaged vehicle region in two steps via vehicle component segmentation and damage severity classification” at paragraph 0073, line 1; “The system 10 can use an error metric, such as the per-pixel cross-entropy loss function, to measure the error of the neural network 16. The cross-entropy loss function evaluates class predictions for each pixel vector individually and then averages over all pixels” at paragraph 0065, second to last sentence; Figure 24A shows the pooling and unpooling layers of the U-Net, the ReLU layers and the skip connections, which are referred to as copy and crop).
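
For orientation on the claims 9, 10 and 19 discussion, here is a minimal PyTorch sketch of the cited U-Net pattern: pooling on the way down, unpooling (approximated here by a transposed convolution) on the way up, ReLU non-linearities, a copy-style skip connection from encoder to decoder, and the per-pixel cross-entropy loss quoted from Malreddy. Channel counts and sizes are illustrative assumptions, not taken from Figure 24A:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Minimal U-Net-style segmenter (illustrative, not the cited network)."""
    def __init__(self, in_ch=3, n_classes=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)                        # pooling layer
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # "unpooling" layer
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, n_classes, 1)            # per-pixel class scores

    def forward(self, x):
        e = self.enc(x)                     # encoder features, kept for the skip
        m = self.mid(self.pool(e))          # pooled, deeper features
        u = self.up(m)                      # upsample back to encoder resolution
        d = self.dec(torch.cat([u, e], 1))  # skip: copy encoder features across
        return self.head(d)

# Per-pixel cross-entropy, evaluated per pixel and averaged over all pixels:
model = TinyUNet()
logits = model(torch.randn(1, 3, 64, 64))   # (N, classes, H, W)
target = torch.randint(0, 3, (1, 64, 64))   # per-pixel class labels
loss = F.cross_entropy(logits, target)
```

Note that the claim recites a loss “defined as a sum of losses between the input image and the output image,” while the quoted Malreddy passage describes per-pixel cross-entropy; the sketch follows the quoted passage.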
Claim(s) 2, 5, 12 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Malreddy et al. and Hida as applied to claims 1 and 11 above, and further in view of Hyatt et al. (US 2023/0281791).

Regarding claims 2 and 12, the Malreddy et al. and Hida combination discloses an apparatus and method wherein the code comprises instructions for the at least one processor to generate the input image by use of a cropped image including a selected area, wherein the selected area includes all portions of the detected part (“Damage severity classification can be classified for each vehicle component segmentation class according to one of undamaged, mildly damaged and extremely damaged by cropping each vehicle component along with its corresponding context from the obtained segmentation” at paragraph 0073, last sentence). The Malreddy et al. and Hida combination does not explicitly disclose applying normalization to pixel values included in the cropped image depending on a selected pixel interval to change the pixel values. Hyatt et al. teaches an apparatus and method wherein the code comprises instructions for the at least one processor to generate the input image by use of a cropped image including a selected area, and apply normalization to pixel values included in the cropped image depending on a selected pixel interval to change the pixel values (“Input module 312 may include an image processing component to process an inspection image prior to it being received at the classification component. The image processing component may modify statistics of input data (e.g., images). For example, changes to input images (namely to inspection images 5) may be performed by input module 312, based on prior processing of reference images (e.g., images 4, 4′ and 4″). Changes to parameters of input images may include, for example, adjusting coefficients or changing pixel values for normalization, colorization, contrast, balance, lighting-fixing, as well as, aligning, cropping, warping an image or parts of it to specific coordinates” at paragraph 0085, line 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the image adjustment as taught by Hyatt et al. to generate the input images of the Malreddy et al. and Hida combination to provide input images to the classifier in a better form for performance.

Regarding claims 5 and 15, the Malreddy et al. and Hida combination discloses the elements of claims 1 and 11 as described above. The Malreddy et al. and Hida combination does not explicitly disclose that the at least one processor is to adjust one of or any combination of a color feature of the input image, an edge feature of the input image, a polygon feature of the input image, a saturation feature of the input image, a color temperature feature of the input image, a definition feature of the input image, a contrast feature of the input image, a blur feature of the input image, and a brightness feature of the input image, and apply the adjusted image to the image analysis model to obtain the defect type and the defect size. Hyatt et al. teaches an apparatus and method wherein the code comprises instructions for the at least one processor to adjust one of or any combination of a color feature of the input image, an edge feature of the input image, a polygon feature of the input image, a saturation feature of the input image, a color temperature feature of the input image, a definition feature of the input image, a contrast feature of the input image, a blur feature of the input image, and a brightness feature of the input image (“Input module 312 may include an image processing component to process an inspection image prior to it being received at the classification component. The image processing component may modify statistics of input data (e.g., images). For example, changes to input images (namely to inspection images 5) may be performed by input module 312, based on prior processing of reference images (e.g., images 4, 4′ and 4″). Changes to parameters of input images may include, for example, adjusting coefficients or changing pixel values for normalization, colorization, contrast, balance, lighting-fixing, as well as, aligning, cropping, warping an image or parts of it to specific coordinates” at paragraph 0085, line 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the image adjustment as taught by Hyatt et al. to generate the input images of the Malreddy et al. and Hida combination to provide input images to the classifier in a better form for performance.
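
The normalization limitation of claims 2 and 12 (changing pixel values “depending on a selected pixel interval”) reads on ordinary min-max rescaling. A sketch, assuming NumPy; the [-1, 1] interval is an assumption, since neither the claim language quoted here nor Hyatt fixes one:

```python
import numpy as np

def normalize_to_interval(crop: np.ndarray, lo: float = -1.0, hi: float = 1.0) -> np.ndarray:
    """Rescale a cropped image's pixel values into the selected interval [lo, hi]."""
    crop = crop.astype(np.float32)
    cmin, cmax = crop.min(), crop.max()
    if cmax == cmin:                        # flat crop: avoid division by zero
        return np.full_like(crop, lo)
    scaled = (crop - cmin) / (cmax - cmin)  # min-max scale to [0, 1]
    return lo + scaled * (hi - lo)          # shift/stretch to [lo, hi]
```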
Claim(s) 7, 8, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Malreddy et al. and Hida as applied to claims 1 and 11 above, and further in view of Chen et al. (US 10,706,321).

Regarding claims 7 and 17, the Malreddy et al. and Hida combination discloses the elements of claims 1 and 11 as described above. The Malreddy et al. and Hida combination does not explicitly disclose calculating availability of the part and a range of breakage of the part, based on the obtained defect size. Chen et al. teaches an apparatus and method wherein the code comprises instructions for the at least one processor to calculate availability of the part and a range of breakage of the part, based on the obtained defect size (“With further regard to the block 1608, in some implementations, determining or identifying one or more parts needed to repair the vehicle includes selecting one or more parts that are needed to repair the vehicle based on an availability of parts, either individually and/or as an integral unit. For example, consider a scenario in which the set of damage parameter values generated at the block 1605 indicates that a driver door, driver door handle, driver door window, and mechanism for raising and lowering the window are damaged and need to be replaced” at col. 33, line 35; “At a block 1605, the method 1600 includes applying an image processor to the source images to thereby generate a set of values corresponding to the damage of the target vehicle (e.g., to generate a set of damage parameter values). For example, one or more of the blocks 202-208 of the system 200 may be applied to the obtained source images of the damaged vehicle to determine occurrences of damage at various locations, points, or areas of the vehicle, and to generate a set of damage values corresponding to the determined damage. Each value included in the generated set of values may respectively correspond to respective damage at a respective location of the damaged vehicle. A particular damage value may correspond to a degree of severity of respective damage at a particular location on the target vehicle, a level of accuracy of the degree of severity of the respective damage at the particular location on the target vehicle, a type of damage at the particular location on the target vehicle, or a likelihood of an occurrence of any damage at the respective location on the target vehicle, for example” at col. 31, line 15). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize part availability determination as taught by Chen et al. in the system of the Malreddy et al. and Hida combination as a way to “estimate repair costs, estimate the amount of change that has occurred in the object, estimate the amount of time or effort needed to correct or fix the change” (Chen et al. at col. 2, line 62).

Regarding claims 8 and 18, the Malreddy et al. and Hida combination discloses the elements of claims 1 and 11 as described above. The Malreddy et al. and Hida combination does not explicitly disclose providing an interface to receive the image from a parts supplier supplying used parts and provide the parts supplier with the output image, the defect type, and the defect size stored in an image storage server through the interface. Chen et al. teaches an apparatus and method wherein the code comprises instructions for the at least one processor to provide an interface to receive the image from a parts supplier supplying used parts and provide the parts supplier with the output image, the defect type, and the defect size stored in an image storage server through the interface (“At any rate, at a block 1610, the method 1600 includes generating an indication of the one or parts needed to repair the vehicle, e.g., as determined/identified at the block 1608. At a block 1612, the indication is provided to at least one of a user interface or to another computing device. In some scenarios, providing the indication of the one or more parts needed to repair the vehicle (block 1612) includes ordering the one or parts needed to repair the vehicle. For example, the indication of the one or more parts needed to repair the vehicle may be electronically transmitted to a parts ordering system, a parts procurement system, or a parts store” at col. 33, line 62). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize part availability determination as taught by Chen et al. in the system of the Malreddy et al. and Hida combination as a way to “estimate repair costs, estimate the amount of change that has occurred in the object, estimate the amount of time or effort needed to correct or fix the change” (Chen et al. at col. 2, line 62).

Claim(s) 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Malreddy et al., Hida and Hyatt et al. as applied to claims 2 and 12 above, and further in view of Noda et al. (US 2019/0205668).

The Malreddy et al., Hida and Hyatt et al. combination discloses the elements of claims 2 and 12 as described above. The Malreddy et al., Hida and Hyatt et al. combination does not explicitly disclose changing a channel of the cropped image, the pixel values of which are changed, to a first channel or a second channel, and changing a plurality of pixel values included in the cropped image, the pixel values of which are changed, to a one-dimensional array in an order of channel features included in the first channel, based on the channel of the cropped image, the pixel values of which are changed, being changed to the first channel including a plurality of channels. Noda et al. teaches an apparatus and method wherein the code comprises instructions for the at least one processor to: change a channel of the image, the pixel values of which are changed, to a first channel or a second channel, and change a plurality of pixel values included in the image, the pixel values of which are changed, to a one-dimensional array in an order of channel features included in the first channel, based on the channel of the image, the pixel values of which are changed, being changed to the first channel including a plurality of channels (“The image data input the neural network may also be an R, G, B color image, or an image resultant of a color space conversion, such as a Y, U, V color image. Furthermore, the image input to the neural network may be a one-channel image resultant of converting the color image into a monochromatic image. Furthermore, instead of inputting the image as it is, assuming that an R, G, B color image is to be input, for example, the neural network may also receive an image from which an average pixel value in each channel is subtracted, or a normalized image from which an average value is subtracted and divided by a variance, as an input. Furthermore, a captured image corresponding to some point in time, or a part thereof may be also input to the neural network. It is also possible to input a captured image including a plurality of frames corresponding to several points in time with reference to one point in time, or a part of each captured image including a plurality of frames may be input to the neural network” at paragraph 0042). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the color conversion as taught by Noda et al. on the input images of the Malreddy et al., Hida and Hyatt et al. combination to provide input images to the classifier in a better form for performance.
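
The claims 3 and 13 limitation (changing the crop to a first or second channel and flattening pixel values to a one-dimensional array in channel order) corresponds to routine channel conversion plus flattening. A NumPy sketch; the luminance weights and channel-first ordering are common conventions assumed for illustration, not taken from Noda:

```python
import numpy as np

def to_channel_ordered_vector(img: np.ndarray, mono: bool = False) -> np.ndarray:
    """Convert an (H, W, 3) crop to a 1-D array ordered channel by channel,
    optionally collapsing it to a single (monochrome) channel first."""
    if mono:  # "second channel" case: one-channel monochromatic image
        img = img @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
        img = img[..., np.newaxis]
    chw = np.transpose(img, (2, 0, 1))  # group values channel by channel
    return chw.reshape(-1)              # flatten: all of channel 0, then 1, ...
```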
Claim(s) 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Malreddy et al., Hida, Hyatt et al. and Noda et al. as applied to claims 3 and 13 above, and further in view of Price et al. (US 10,970,835).

The Malreddy et al., Hida, Hyatt et al. and Noda et al. combination discloses the elements of claims 3 and 13 as described above. The Malreddy et al., Hida, Hyatt et al. and Noda et al. combination does not explicitly disclose concatenating the image, the input image, and the output image, and transmitting the concatenated image to an image storage server. Price et al. teaches an apparatus and method in the same field of endeavor of vehicle damage identification, wherein the code comprises instructions for the at least one processor to concatenate the image and vehicle information, and transmit the concatenated data to an image storage server (“In a third implementation, alone or in combination with one or more of the first and second implementations, process 500 may include storing, in a data structure, information identifying the damaged part of the vehicle and the information regarding the damage on the vehicle in association with the information identifying the vehicle, receiving another plurality of images of the vehicle, and identifying, in the other plurality of images, a location of the damaged part in one or more of the other plurality of images based on the information identifying the damaged part of the vehicle” at col. 20, line 48). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the data consolidation as taught by Price et al. in the system of the Malreddy et al., Hida, Hyatt et al. and Noda et al. combination to ensure all the relevant damage data for the vehicle is contained in a central location.
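
The claims 4 and 14 limitation (concatenating the image, the input image, and the output image before transmission to a storage server) can be read as simple array concatenation plus record-keeping. A hypothetical NumPy sketch; the shapes, field names, and side-by-side layout are invented for illustration and appear in neither the claims nor Price:

```python
import numpy as np

h, w = 64, 64  # placeholder dimensions
original = np.zeros((h, w, 3), dtype=np.float32)    # image as received
input_img = np.zeros((h, w, 3), dtype=np.float32)   # preprocessed model input
output_img = np.zeros((h, w, 3), dtype=np.float32)  # labeled model output

panel = np.concatenate([original, input_img, output_img], axis=1)  # (h, 3*w, 3)
record = {"part": "front_door_left", "defect_type": "scratch",
          "defect_size_px": 120, "panel": panel}    # one unit to send to storage
```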
Claim(s) 20 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Malreddy et al. and Chen et al.

Malreddy et al. discloses an image processing method, comprising: receiving from an image processing apparatus one of or any combination of an image including a part of a vehicle, a defect type of the part of the vehicle, and a defect size of the part of the vehicle, wherein one of or both of the defect type and the defect size are obtained by applying the image to an image analysis model (“Damage severity classification can be classified for each vehicle component segmentation class according to one of undamaged, mildly damaged and extremely damaged by cropping each vehicle component along with its corresponding context from the obtained segmentation” at paragraph 0073, last sentence; “For example, the system 10 can determine whether the location of the damage includes at least one of a front of the vehicle (e.g., a hood or windshield) in step 80, a rear of the vehicle (e.g., a bumper and trunk) in step 82 and/or a side of the vehicle (e.g., a passenger door) in step 84. In step 86, the system 10 determines a severity classification of the damage sustained by the detected vehicle in the received image. For example, the system 10 can determine whether the sustained damage is minor in step 88, moderate in step 90 or severe in step 92” at paragraph 0057, line 9; severity reflects size and type). Malreddy et al. does not explicitly disclose providing an interface to receive a query about information of a used part from a parts supplier supplying used parts and providing the parts supplier with one of or any combination of the defect type, the defect size, and the image, based on receiving the query about the information of the used part through the interface. Chen et al. teaches a method, comprising: providing an interface to receive a query about information of a used part from a parts supplier supplying used parts; and providing the parts supplier with one of or any combination of the defect type, the defect size, and the image, based on receiving the query about the information of the used part through the interface (“At any rate, at a block 1610, the method 1600 includes generating an indication of the one or parts needed to repair the vehicle, e.g., as determined/identified at the block 1608. At a block 1612, the indication is provided to at least one of a user interface or to another computing device. In some scenarios, providing the indication of the one or more parts needed to repair the vehicle (block 1612) includes ordering the one or parts needed to repair the vehicle. For example, the indication of the one or more parts needed to repair the vehicle may be electronically transmitted to a parts ordering system, a parts procurement system, or a parts store” at col. 33, line 62). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize part availability determination as taught by Chen et al. in the system of Malreddy et al. as a way to “estimate repair costs, estimate the amount of change that has occurred in the object, estimate the amount of time or effort needed to correct or fix the change” (Chen et al. at col. 2, line 62).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATRINA R FUJITA whose telephone number is (571) 270-1574. The examiner can normally be reached Monday through Friday, 9:30 am to 5:30 pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATRINA R FUJITA/
Primary Examiner, Art Unit 2672

Prosecution Timeline

Nov 13, 2023: Application Filed
Nov 10, 2025: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597250: DETECTION OF PLANT DETRIMENTS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12582476: SYSTEMS FOR PLANNING AND PERFORMING BIOPSY PROCEDURES AND ASSOCIATED METHODS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585698: MULTIMEDIA FOCALIZATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586190: SYSTEM AND METHOD OF CLASSIFICATION OF BIOLOGICAL PARTICLES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12566341: PREDICTING SIZING AND/OR FITTING OF HEAD MOUNTED WEARABLE DEVICE (granted Mar 03, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 94% (+24.0%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 674 resolved cases by this examiner. Grant probability is derived from the career allow rate.
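
The arithmetic behind these figures appears straightforward: the base rate comes from the examiner's career outcomes and the interview figure adds the observed lift on top. A sketch of that assumed additive, capped model, which reproduces the displayed numbers:

```python
granted, resolved = 472, 674
base = granted / resolved                         # 0.7003 -> "70% Grant Probability"
interview_lift = 0.24                             # examiner's observed interview lift
with_interview = min(base + interview_lift, 1.0)  # 0.9403 -> "94% With Interview"
print(f"base {base:.0%}, with interview {with_interview:.0%}")
```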
