Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
2. This is the initial Office Action based on the application filed on September 25, 2024. The Examiner acknowledges the following:
3. Claims 1 – 40 were initially filed.
4. A preliminary amendment was filed by Applicant on the same date, canceling claims 21 – 40.
4.A. The specification was amended on the same date to include cross-references to patent applications related to the present application.
5. The drawings filed on 09/25/2024 are accepted by the Examiner.
6. Claims 1 – 20 are currently pending. Claims 21 – 40 were canceled by Applicant; therefore, claims 1 – 20 are considered in this examination.
Information Disclosure Statement
7. The IDS documents filed on 01/02/2025 and 05/29/2025 are acknowledged by the Examiner.
Priority
8. Priority is claimed based on European patent application EP-23306783.4, filed 10/12/2023. Certified copies were filed with the Office on 11/22/2025.
Claim Rejections - 35 USC § 112(b)
9. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Regarding Claims 1 – 20:
Claims 1, 14 and 18 recite “…obtaining a three-dimensional (3D) model of the vehicle” – what does Applicant mean by “obtaining a 3D model of a vehicle”? Is it a miniature car, a car prototype, a picture of a car, a perspective drawing or image of a car, a type of vehicle such as an automobile, a truck, a pickup or the like, a program modeling a car, or a description of a car?
“…generating, from the 3D model of the vehicle, a part-annotated 3D model of the vehicle;” – is it creating a drawing or a prototype for each part of a car, where each one has a part number, a marker, or other identification?
“obtaining information specifying portions of the vehicle to be imaged by the user;” – how can that happen? Does the user have some sort of screen/LCD showing a car picture with image segments detailing each part/portion, with an ID or part-number annotation associated with a detailed description of that part/portion? Or is it merely a visual representation?
“generating, using the part-annotated 3D model of the vehicle and information specifying the portions of the vehicle to be imaged by the user” – is it like a game wherein a user matches parts of a car to the correct positions so as to compose a complete image?
“overlays corresponding to the portions of the vehicle to be imaged by the user; and outputting the generated overlays for use in guiding the user in capturing images of the portions of the vehicle.” – is it to capture an image and verify whether the program used includes a similar part, attempting a match to determine whether the user has the right car part or the right format?
What does Applicant mean by “obtaining a 3D model of a vehicle”? What about “generating a part-annotated 3D model” – is it creating a drawing or a prototype for each part of a car, where each one has a part number, a marker, or identification? What about “obtaining information specifying portions of the vehicle”? The language is quite confusing, and it does not point out what the inventor(s) regard as their invention.
Claims 1, 14 and 18 are rejected under 35 U.S.C. 112(b) for failing to clearly set forth what Applicant regards as the invention. The claim language is confusing and does not enable one of ordinary skill in the art to ascertain the metes and bounds of the claimed subject matter.
Claims 2 – 13, 15 – 17 and 19 – 20 are rejected under the same rationale as discussed above based on their direct or indirect dependence on a rejected claim.
Claim Rejections - 35 USC § 102
10. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 – 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kaigang Li et al. (US 2018/0260793 A1), hereinafter “Li”. (Note: art from the IDS.)
Note: The rejection under 35 U.S.C. 112(b) is considered in the rejections below.
Regarding Claims 1, 14 and 18:
Li teaches a method, comprising: receiving, at a server computing device over an electronic network, one or more images of a damaged vehicle from a client computing device; performing computerized image processing based on the one or more images to generate one or more damage detection images, wherein each damage detection image is a two-dimensional (2D) image that includes indications of areas of damage to the vehicle in the damage detection image; mapping the one or more damage detection images to a three-dimensional (3D) model of the vehicle to generate a damaged 3D model that indicates areas of the vehicle that are damaged; and calculating an estimated repair cost for the vehicle based on the damaged 3D model.
Li further teaches that the damaged 3D model is segmented into vehicle parts, and that calculating the estimated repair cost is based on accessing a parts database that includes repair or replacement costs for damaged vehicle parts, wherein said database of repair costs includes estimates for parts and labor for individual vehicle parts; that calculating the estimated repair cost comprises determining that a vehicle part should be replaced when a damaged area of the vehicle part is above a threshold; and that calculating the estimated repair cost comprises determining that a vehicle part should be repaired when a damaged area of the vehicle part is below a threshold.
Li also teaches the method further comprising: performing pre-processing on the one or more images to remove images that show an interior of the vehicle, wherein performing computerized image processing includes: determining, for a given image of the one or more images, which external parts of the vehicle are visible in the given image; and performing, for the given image, instance segmentation to identify an outline of the vehicle in the given image and remove other parts of the given image, wherein performing instance segmentation comprises: identifying a portion of the given image that includes the vehicle using Multi-task Network Cascades (MNC); and refining the portion of the given image that includes the vehicle using an edge-map detected from a Structured Random Forest (SRF).
Li further teaches that performing computerized image processing includes: determining, for a given image of the one or more images, which external parts of the vehicle are visible in the given image based on a pose of the vehicle relative to a camera that captured the given image; and, for the given image, detecting damaged areas of the given image by processing each of the external parts of the vehicle that is visible through a respective Convolutional Neural Network (CNN) corresponding to the external part.
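For illustration only, the threshold-based repair-versus-replace logic in Li's claims can be sketched as follows. The parts database contents, the damage fractions, and the 0.5 threshold are hypothetical placeholders, not values taken from Li.

```python
# Minimal sketch of threshold-based repair/replace cost estimation in the
# spirit of Li's claims. All costs, fractions, and the threshold are
# hypothetical placeholders for illustration only.

# Hypothetical parts database: part -> replacement cost, repair cost, labor.
PARTS_DB = {
    "hood":         {"replace": 650.0, "repair": 220.0, "labor": 180.0},
    "front_bumper": {"replace": 480.0, "repair": 150.0, "labor": 120.0},
}

REPLACE_THRESHOLD = 0.5  # fraction of the part's area damaged (assumed value)

def estimate_repair_cost(damaged_parts: dict[str, float]) -> float:
    """damaged_parts maps a part name to the fraction of its area damaged."""
    total = 0.0
    for part, damage_fraction in damaged_parts.items():
        entry = PARTS_DB[part]
        if damage_fraction > REPLACE_THRESHOLD:
            total += entry["replace"] + entry["labor"]  # replace the part
        else:
            total += entry["repair"] + entry["labor"]   # repair the part
    return total

print(estimate_repair_cost({"hood": 0.7, "front_bumper": 0.2}))  # 1100.0
```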
Regarding Claim 1:
Li teaches,
Li teaches a method (Li, claims 1 – 12 disclose a method for guiding and obtaining 3D images of a vehicle) for use in connection with guiding a user in capturing one or more images of a vehicle (in Fig 19A, a 3D model of the user's vehicle is displayed and the user is prompted to tap on the portion of the vehicle that is damaged; for example, the user may tap on the hood of the vehicle, which causes an interface such as the one shown in Fig 19B to be displayed. If the user selects “Yes” in the prompt in Fig 19B, the interface in Fig 19C may be displayed, in which an outline 1902 is displayed for the hood of the vehicle, superimposed/overlayed on a live camera view from the client device. The user can then position the camera of the client device so that the hood of the car aligns with the outline 1902. Once the hood of the car aligns with the outline 1902, a photo is captured, either automatically by the camera or manually by the user selecting a capture button. The user can be prompted in this manner to capture photos of all damaged parts using the 3D model of the vehicle. See [0159]), the method comprising:
using at least one computer hardware processor (Li, Fig 1, adjuster computing device 106. See [0070; 0071]) to perform: obtaining a three-dimensional (3D) model of the vehicle (Li, claim 1, mapping the detection images to a three-dimensional (3D) model indicating an area/portion of the vehicle that was damaged);
generating, from the 3D model of the vehicle, a part-annotated 3D model of the vehicle (Li, Fig 19C ([0159]) shows a part (the hood) annotated as part of the 3D model; Li, claim 2 shows that the 3D model is segmented into annotated vehicle parts; Fig 18C shows the 3D model 1802, a list of the car parts 1804, and the vehicle views 1806. See [0158]);
obtaining information specifying portions of the vehicle to be imaged by the user (Li, Fig 19B, which allows the user to access the interface with outline 1902, and Fig 19C, which shows the 3D model of the user's car with a specific part, the car hood, that is to be imaged. See [0159]);
generating, using the part-annotated 3D model of the vehicle and information specifying the portions of the vehicle to be imaged by the user, overlays corresponding to the portions of the vehicle to be imaged by the user; and outputting the generated overlays for use in guiding the user in capturing images of the portions of the vehicle (Fig 19C, an outline 1902 is displayed for the car hood, overlayed on a live camera view from the user device. The user can position the camera of the device so that the car hood aligns with the outline 1902; when the car hood is aligned with the outline 1902, an image of the damaged car hood is captured manually or automatically, and other car parts can be captured in the same manner using the 3D model of the vehicle. See [0159]).
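The align-then-capture behavior described at [0159] amounts to testing whether the part silhouette detected in the live camera frame overlaps the displayed outline closely enough to trigger a capture. A minimal sketch follows; the intersection-over-union criterion and the 0.8 threshold are illustrative assumptions, since Li does not specify the alignment test.

```python
import numpy as np

def aligned_with_outline(live_mask: np.ndarray, outline_mask: np.ndarray,
                         iou_threshold: float = 0.8) -> bool:
    """Return True when the part silhouette detected in the live camera
    frame sufficiently overlaps the displayed outline overlay.
    Both masks are boolean arrays of the same shape."""
    intersection = np.logical_and(live_mask, outline_mask).sum()
    union = np.logical_or(live_mask, outline_mask).sum()
    iou = intersection / union if union else 0.0
    return iou >= iou_threshold

# In a capture loop, a photo would be taken automatically once
# aligned_with_outline(...) returns True, or manually by the user.
```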
Regarding Claim 14:
The rejection of claim 1 is incorporated herein. Claim 14 pertains to a system that operates according to the method disclosed in claim 1. Most of the limitations of claim 14 were already discussed in the claim 1 rejection. As for the additional limitations, Li teaches,
A system for use in connection with guiding a user in capturing one or more images of a vehicle (Fig 1 shows a system 100 that includes a server/cluster 102, one or more client/user devices 104-1 to 104-n, an adjuster computing device 106, and network connections 108. See [0070]), the system comprising: at least one computer hardware processor (Fig 1, adjuster computing device 106. See [0070]); and at least one non-transitory computer-readable storage medium (Fig 2, storage device 208, which stores large amounts of information and includes EPROM or EEPROM memories. See [0076]) storing processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to perform:
Regarding Claim 18:
The rejections of claims 1 and 14 are incorporated herein. Claim 18 includes limitations similar to those discussed in the claim 1 and claim 14 rejections. As for the non-transitory computer readable storage medium, Fig 2 shows storage device 208 (See [0076]) and Fig 3 shows storage device(s) 308 (See [0088]).
Regarding Claim 2:
The rejection of claim 1 is incorporated herein. As for claim 2 limitations, Li teaches that the background removal can be performed (See [0099]) with image segmentation using Conditional Random Fields (CRF) realized as Recurrent Neural Networks (RNN).
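For illustration, background removal of this kind can be sketched with any foreground/background segmenter. The GrabCut call below is a stand-in for Li's CRF-as-RNN segmentation, chosen only because it is readily available in OpenCV; it is not the technique Li discloses.

```python
import cv2
import numpy as np

def remove_background(image_bgr: np.ndarray,
                      rect: tuple[int, int, int, int]) -> np.ndarray:
    """Zero out everything outside the segmented foreground (the vehicle).
    `rect` is an (x, y, w, h) box roughly enclosing the vehicle."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_RECT)
    # Keep pixels labeled definite or probable foreground.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
    return image_bgr * fg.astype(np.uint8)[:, :, None]
```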
Regarding Claim 3:
The rejection of claims 1 and 2 is incorporated herein. As for claim 3 limitations, Li teaches
wherein generating the part-annotated 3D model of the vehicle comprises: generating multiple viewpoints (by using the 3D model of the same car, it may include multiple images of the same vehicle from different viewpoints; although each image is analyzed independently, the evaluation results can be integrated together, which is referred to herein as “image fusion”. See [0341]); generating, using the 3D model of the vehicle, multiple renderings of the vehicle corresponding to the multiple viewpoints (the disclosed system is able to generate a heatmap, for each image, to indicate the region where the vehicle is damaged in the image. By fusing the 2D heatmaps from different images, embodiments of the disclosure are able to enhance the damage assessment, for example, in case the heatmap for one image is not satisfactory. To do so, embodiments of the disclosure utilize the 3D model of the vehicle and map each heatmap into a common 3D space, leading to a 3D version of a heatmap that is convenient for damage appraisal. In some aspects, image fusion can be thought of as “wrapping” the heatmaps of the 2D images onto the 3D model. See [0341]); identifying vehicle parts in the multiple renderings by using the trained deep neural network model (a label can be put onto the damaged part; some embodiments then project the images onto the 3D model of the vehicle using the camera angles determined during the alignment process, and the 3D model then shows the damage to the vehicle in an integrated manner; the adjuster can rotate and zoom in on the 3D model as desired (See [0164]); a pre-trained deep neural network is used to extract features from both a 3D model and the image to be aligned. See [0289]); and generating the part-annotated 3D model using the vehicle parts identified in the multiple renderings (when the adjuster clicks on a damaged part, the interface may show all the original images that contain that part on the side, so that the adjuster can easily examine in the original images where the damage was identified using multiple renderings. See [0164]).
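The render-from-multiple-viewpoints-then-identify-parts flow can be sketched as the loop below. The renderer and part detector are stubs standing in for a 3D engine and Li's trained deep neural network, and the eight azimuth angles are an arbitrary illustrative choice.

```python
import numpy as np

# Stubs: a real system would use a 3D rendering engine and a trained deep
# neural network; these placeholders only show the data flow.
def render_view(model_3d, azimuth_deg: float) -> np.ndarray:
    return np.zeros((256, 256, 3), np.uint8)     # placeholder rendering

def identify_parts(rendering: np.ndarray) -> dict[str, np.ndarray]:
    return {"hood": np.zeros((256, 256), bool)}  # placeholder part masks

def annotate_parts(model_3d, viewpoints=(0, 45, 90, 135, 180, 225, 270, 315)):
    """Render the 3D model from several viewpoints, identify vehicle parts
    in each rendering, and collect per-part evidence across the views."""
    part_votes: dict[str, int] = {}
    for azimuth in viewpoints:
        rendering = render_view(model_3d, azimuth)
        for part_name in identify_parts(rendering):
            part_votes[part_name] = part_votes.get(part_name, 0) + 1
    # The collected evidence would then be projected back onto the 3D
    # model's faces to produce the part-annotated model.
    return part_votes
```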
Regarding Claim 4:
The rejection of claims 1 and 2 is incorporated herein. As for claim 4 limitations, Li teaches that, to leverage the power of deep learning on this task, some embodiments use an encoder-decoder shaped neural network for heatmap prediction. The encoder part of this network is composed of a series of convolutional layers and intermediate max pooling layers; the output is a down-sampled feature map extracted from the input image. Following the down-sampling encoder network, there is an up-sampling decoder network, in which a series of transposed convolutional layers, also called deconvolutional layers, are applied to up-sample the feature maps. Embodiments of the disclosure call the layer that connects the encoder and decoder networks the bottleneck layer, because it has the smallest input and output size. Several skip connection layers are bridged between the encoder network and the decoder network, to merge the spatially rich information from low-level features in the encoder network with the high-level object knowledge in the decoder network (See [0320]).
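The architecture described at [0320] (a convolution/max-pooling encoder, a bottleneck, a transposed-convolution decoder, and skip connections) can be illustrated with a minimal PyTorch sketch. The layer widths and depth below are arbitrary choices for illustration, not values from Li.

```python
import torch
import torch.nn as nn

class HeatmapNet(nn.Module):
    """Minimal encoder-decoder network with a bottleneck and skip
    connections, of the shape described in Li at [0320]."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU())
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)  # deconvolution
        self.dec2 = nn.Sequential(nn.Conv2d(128, 64, 3, padding=1), nn.ReLU())
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 1, 1)  # one-channel damage heatmap

    def forward(self, x):
        e1 = self.enc1(x)                    # spatially rich low-level features
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))   # smallest input/output size
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))

heatmap = HeatmapNet()(torch.randn(1, 3, 128, 128))  # shape (1, 1, 128, 128)
```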
Regarding Claims 5 – 6:
The rejection of claims 1 and 3 is incorporated herein. As for claims 5 – 6 limitations, Li, Fig 27, steps 2704-2706-2708-2710-2712, shows how to analyze damaged parts from a single captured image; this process can be repeated for each input image. For example, in the context of an auto claim, most claim cases include multiple images of the same vehicle from different viewpoints. Although each image is analyzed independently, the evaluation results can be integrated together, which is referred to herein as “image fusion.” For example, the disclosed system is able to generate a heatmap, for each image, to indicate the region where the vehicle is damaged in the image (See [0341]). By fusing the 2D heatmaps from different images, embodiments of the disclosure are able to enhance the damage assessment, for example, in case the heatmap for one image is not satisfactory. To do so, embodiments of the disclosure utilize the 3D model of the vehicle and map each heatmap into a common 3D space, leading to a 3D version of a heatmap that is convenient for damage appraisal. In some aspects, image fusion can be thought of as “wrapping” the heatmaps of the 2D images onto the 3D model. The CNN uses a plurality of parameters for the 3D model, for example for identifying the front bumper (See [0185; 0186 – 0191]). Nothing in Li precludes the CNN 3D model from including millions of parameters.
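The 2D-to-3D fusion described above can be sketched as projecting each model vertex into each image and averaging the sampled heatmap values. The pinhole projection matrices and the simple averaging rule are illustrative assumptions, and visibility testing is omitted for brevity.

```python
import numpy as np

def fuse_heatmaps(vertices: np.ndarray, cameras: list[np.ndarray],
                  heatmaps: list[np.ndarray]) -> np.ndarray:
    """Map several 2D damage heatmaps into a common 3D space by projecting
    each 3D-model vertex into each image and averaging the sampled values.
    `vertices` is (N, 3); each camera is a 3x4 projection matrix; each
    heatmap is an (H, W) float array."""
    n = len(vertices)
    accum, count = np.zeros(n), np.zeros(n)
    homog = np.hstack([vertices, np.ones((n, 1))])  # (N, 4) homogeneous coords
    for P, hm in zip(cameras, heatmaps):
        proj = homog @ P.T                  # (N, 3)
        uv = proj[:, :2] / proj[:, 2:3]     # pixel coordinates
        h, w = hm.shape
        inside = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
                  (uv[:, 1] >= 0) & (uv[:, 1] < h))
        u = np.clip(uv[:, 0].round().astype(int), 0, w - 1)
        v = np.clip(uv[:, 1].round().astype(int), 0, h - 1)
        accum[inside] += hm[v[inside], u[inside]]
        count[inside] += 1
    # Per-vertex damage values: the "3D version of a heatmap".
    return np.where(count > 0, accum / np.maximum(count, 1), 0.0)
```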
Regarding Claims 7 and 8:
The rejection of claim 1 is incorporated herein. As for claims 7 and 8 limitations, Li teaches in Fig 19C that an outline 1902 is displayed for the hood of the vehicle, which is fully visible in the drawing, superimposed on a live camera view from the client device. The user can then position the camera of the client device so that the hood of the car aligns with the outline 1902. Once the hood of the car aligns with the outline 1902, a photo is captured, either automatically by the camera or manually by the user selecting a capture button. The user can be prompted in this manner to capture photos of all damaged parts using a 3D model of the vehicle (See [0159]). Additionally, Li teaches that learning for the 3D model is done by changing parameters of the system, which include camera parameters, until the system outputs results as close to the desired outputs as possible. Once such a machine learning system has learned the input-output relationship from the training data, it can be used to predict the output upon receiving a new input for which the output may not be known. The larger the training data set and the more representative it is of the input space, the better the machine learning system performs on the prediction task, so as to achieve a degree of accuracy (See [0171; 0177]). Also, for claim 8, the one or more camera parameters can include camera position, camera pan/tilt angle, focal length, field of view (FOV), zoom, etc. (See [0335]).
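The camera parameters listed at [0335] can be collected in a simple structure. The focal-length relation f = (W/2)/tan(FOV/2) is standard pinhole-camera geometry, and the example values are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class CameraParams:
    """Camera parameters of the kind listed in Li at [0335]."""
    position: tuple[float, float, float]  # camera position in world space
    pan_deg: float                        # pan angle
    tilt_deg: float                       # tilt angle
    fov_deg: float                        # horizontal field of view
    image_width: int
    image_height: int

    def focal_length_px(self) -> float:
        """Focal length in pixels from the horizontal FOV:
        f = (W / 2) / tan(FOV / 2)."""
        return (self.image_width / 2) / math.tan(math.radians(self.fov_deg) / 2)

cam = CameraParams((0.0, 1.2, -4.0), 0.0, 5.0, 60.0, 1920, 1080)
print(round(cam.focal_length_px(), 1))  # 1662.8
```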
Regarding Claim 9:
The rejection of claim 1 is incorporated herein. As for claim 9, Li teaches
“overlays corresponding to the portions of the vehicle to be imaged by the user (Fig 19C. See [0159]) comprises: for each particular portion of the vehicle to be imaged and using the information specifying portions of the vehicle (Fig 19C, overlay 1902 shows the car hood. See [0159]), generating vehicle-specific camera parameters using the information specifying portions of the vehicle; determining boundaries of one or more parts of the particular portion of the vehicle using the vehicle-specific camera parameters (contour/boundary curves are generated as close as possible to the contour of the vehicle or damaged vehicle parts in the images. See [0105; 0106; 0131]); and generating an overlay based on the determined boundaries”.
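Generating an overlay from determined boundaries can be sketched as tracing the contour of a rendered part mask and drawing it over the live frame, like the outline 1902 in Fig 19C. The OpenCV calls below are an illustrative implementation choice, not the technique Li discloses.

```python
import cv2
import numpy as np

def make_outline_overlay(part_mask: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Trace the boundary of a binary part mask and draw it over a live
    camera frame (a 3-channel BGR image of the same height and width)."""
    contours, _ = cv2.findContours(part_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    overlay = frame.copy()
    cv2.drawContours(overlay, contours, -1, (0, 255, 0), thickness=3)
    return overlay
```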
Regarding Claims 10 – 11:
The rejection of claims 1 and 9 is incorporated herein. Li teaches the overlay of the outline 1902 (See [0159]). An active contour technique starts with a user-supplied initial contour containing the vehicle within the photo/picture/image, which is evolved along the gradient of the energy function until the gradient becomes zero, i.e., when the energy function has achieved an extremal value (See [0106; 0107]). As for the removal of artifacts from the images, such as background and specular reflections (See [0116]), this does not preclude the removal of isolated or redundant curves.
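The active contour evolution described at [0106; 0107] can be sketched with scikit-image's implementation. The circular initialization and the energy weights below are illustrative values, not parameters from Li.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def refine_vehicle_contour(image_gray: np.ndarray,
                           center: tuple[float, float],
                           radius: float) -> np.ndarray:
    """Evolve a user-supplied initial circle toward the vehicle boundary by
    minimizing the snake energy function. Returns the final (row, col)
    contour points."""
    s = np.linspace(0, 2 * np.pi, 400)
    init = np.column_stack([center[0] + radius * np.sin(s),
                            center[1] + radius * np.cos(s)])
    smoothed = gaussian(image_gray, sigma=3)  # smooth so gradients are stable
    return active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)
```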
Regarding Claim 12:
The rejection of claim 1 is incorporated herein. As for claim 12 limitations, Li teaches that a smartphone or other mobile device can be used to capture the images (See [0057 – 0059]).
Regarding Claim 13:
The rejection of claims 1 and 12 is incorporated herein. As for claim 13 limitations, Li teaches, as discussed for claim 1, guiding the user, using some of the generated overlays, via a software application executed on the client/user mobile device (See [0057 – 0059]) to capture images of a damaged portion of the vehicle (the car hood), wherein the user can position the camera of the device to align the car hood with the outline 1902, as seen in Fig 19C; after the car hood is aligned with the outline 1902, an image is captured either manually, by the user selecting a capture button on the device, or automatically by the camera (See [0159]).
Regarding Claim 15:
The rejections of claims 1, 14 and 3 are incorporated herein. Claim 15 has the same scope as claim 3 but as applied to claim 14, and it includes similar limitations. Therefore, claim 15 is rejected under the same rationale as claim 3. See the claim 3 rejection for more details.
Regarding Claim 16:
The rejections of claims 1, 14 and 4 are incorporated herein. Claim 16 has the same scope as claim 4 but as applied to claim 14, and it includes similar limitations. Therefore, claim 16 is rejected under the same rationale as claim 4. See the claim 4 rejection for more details.
Regarding Claim 17:
The rejections of claims 1, 14 and 5 are incorporated herein. Claim 17 has the same scope as claim 5 but as applied to claim 14, and it includes similar limitations. Therefore, claim 17 is rejected under the same rationale as claim 5. See the claim 5 rejection for more details.
Regarding Claim 19:
The rejections of claims 1, 18 and 3 are incorporated herein. Claim 19 has the same scope as claim 3 but as applied to claim 18, and it includes similar limitations. Therefore, claim 19 is rejected under the same rationale as claim 3. See the claim 3 rejection for more details.
Regarding Claim 20:
The rejections of claims 1, 18 and 5 are incorporated herein. Claim 20 has the same scope as claim 5 but as applied to claim 18, and it includes similar limitations. Therefore, claim 20 is rejected under the same rationale as claim 5. See the claim 5 rejection for more details.
Contact
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARLY S.B. CAMARGO, whose telephone number is (571)270-3729. The examiner can normally be reached M-F, 8:00 AM - 5:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lin Ye can be reached on (571)270-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARLY S CAMARGO/Primary Examiner, Art Unit 2638