DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. 101 as being directed to an abstract idea.
35 U.S.C. 101 requires that a claimed invention must fall within one of the four eligible categories of invention (i.e., process, machine, manufacture, or composition of matter) and must not be directed to subject matter encompassing a judicially recognized exception as interpreted by the courts. MPEP 2106. Three categories of subject matter are judicially recognized exceptions to 35 U.S.C. § 101 (i.e., patent-ineligible): (1) laws of nature, (2) physical phenomena, and (3) abstract ideas. MPEP 2106(II). To be patent-eligible, a claim directed to a judicial exception must as a whole be directed to significantly more than the exception itself. See 2014 Interim Guidance on Patent Subject Matter Eligibility, 79 Fed. Reg. 74618, 74624 (Dec. 16, 2014). Hence, the claim must describe a process or product that applies the exception in a meaningful way, such that it is more than a drafting effort designed to monopolize the exception. Id.
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more. The claims are directed to creating a texture based on a scanned and processed object image. The concept of obtaining an object image, processing the parts of the image, and creating a texture is similar to a mental process performed with pen and paper: a user may obtain an image of an object, visually process the image, and draw a pattern of the texture of various parts of the object. The use of a machine learning algorithm does not add significantly more than well-known general-purpose algorithms. (SmartGene, Inc. v. Advanced Biological Laboratories, SA). Therefore, claims 1 and 17 are rejected.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The dependent claims do not add significantly more than filling in portions of the object for which scans are missing with a texture similar to that of other sections of the object, obtaining feature maps of the objects, determining the geographical position of the objects, and determining distorted portions of the image, each of which can likewise be carried out as a mental process or with general-purpose machine learning algorithms and does not add significantly more to the recited features. Therefore, dependent claims 2-16 and 18-19 are similarly rejected.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Claims 17-19 are interpreted to invoke 35 U.S.C. 112(f).
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a scanning device” and “a central station” in claims 17-19.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The original specification at paragraphs [0021]-[0022] and [0026] discloses the hardware structure(s) or processor(s), and equivalents thereof, that execute the functions performed by the device and the station.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (DE 102019205474 A1, as provided) in view of Tang et al. (US Pub No. 20220101508 A1).
Regarding Claim 1,
Chen discloses A method comprising: scanning an environment of a motor vehicle; (Chen, Description, discloses that in addition to the primary and secondary sensor devices 20, 30, another sensor device, such as a thermal camera, can be used, with which the surroundings 60 of the vehicle 50 are recorded and third environment data are generated. In the case of a thermal camera, the third environment data include thermal data from objects located in the surrounding area 60; an object image is captured with a camera sensor around the vehicle)
recognizing a scanned object in the environment; (Chen, Description, discloses that object recognition plays a central role in automated driving. Objects that are in the vicinity of a self-driving or partially self-driving vehicle must be recognized with a high degree of accuracy and timeliness. This serves to take measures to avoid, or at least mitigate, a collision with such objects, such as other vehicles, road users, people, animals, buildings, plantings, public and private transport facilities ... etc. Compared to the punctual information generated by the primary sensor device, such as radar or lidar sensors, which is only correlated in a group assignment process (clustering), more relationships between the individual pixels can be detected in the second environment data without complex data processing. In particular, the surface texture information extracted from the second environment data can be exploited. The second environment data are thus used to optimize or verify the object recognition by means of the first environment data. The object recognition is therefore based on a more extensive information content of the detected objects, which increases the reliability of the object recognition; the object in the captured image is recognized as being other vehicles, people, animals, buildings, etc.)
determining a texture of the scanned object; (Chen, Description, discloses that the secondary sensor device is, for example, a camera, for example an RGB camera, a stereo camera and/or a surround view camera. The secondary sensor device is preferably designed to detect a surface texture (e.g. color, surface normal, roughness) of detected objects. The surface texture is analyzed by the evaluation unit, from which material class data are extracted. The material class data contain one or more material classes of the objects detected by the secondary sensor device. The material class data are finally combined with the first environment data to form a common data record; the texture of the detected and recognized object is determined by image analysis of the object) and
Chen does not explicitly disclose creating a textured representation of the scanned object.
Tang discloses creating a textured representation of the scanned object. (Tang, [0006], discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; texture feature maps of the vehicle object image are obtained and represented as an image)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chen, having a method of detecting texture regions in a scanned object image, with the teachings of Tang, having a system that encodes the texture detected from an object image and renders it by creating a similar texture in an image to depict any changes in the object image, or that determines specific features by texture patterns in an object image, to improve the detection of objects in applications including vehicle damage assessment, where texture patterns define the extent of damage in the vehicle object image.
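For illustration of the texture-determination concept mapped above, the following is a minimal sketch of computing a texture descriptor for a detected object region using a standard local-binary-pattern routine. It is the editor's example only, not a characterization of either reference's actual implementation; the function name, parameters, and patch source are assumptions.

```python
# Illustrative sketch only; not code from Chen or Tang.
import numpy as np
from skimage.feature import local_binary_pattern

def texture_histogram(gray_patch: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Compute a uniform-LBP histogram as a simple per-patch texture signature."""
    lbp = local_binary_pattern(gray_patch, points, radius, method="uniform")
    n_bins = points + 2  # the 'uniform' method yields P + 2 distinct codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Hypothetical usage: compare a detected object's patch signature against
# stored material-class prototypes (e.g., nearest neighbor on the histogram).
# signature = texture_histogram(gray_object_crop)
```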
Regarding Claim 2,
The combination of Chen and Tang further discloses wherein a plurality of scans of the scanned object are collected; and wherein the textured representation of the scanned object is determined based on the scans. (Tang, [0006], discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; texture feature maps of the vehicle object image are obtained and represented as an image). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 3,
The combination of Chen and Tang further discloses wherein the scans originate from different motor vehicles. (Tang, [0006], discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; texture feature maps of the vehicle object image are obtained and represented as an image). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 4,
The combination of Chen and Tang further discloses wherein the scans show different sections of the scanned object. (Tang, [0006], [0015], FIG. 1, discloses functional units of an apparatus for image-based anomaly detection for vehicle damage assessment in accordance with one embodiment of the present invention. The apparatus may comprise an image processor 101 configured for performing various imaging processing techniques such as image segmentation and augmentation on input images of vehicles; a texture encoder 102 configured for generating texture feature maps from vehicle images processed by the image processor 101; a frame extractor 103 configured for extracting vehicle frame images from vehicle images processed by the image processor 101; an IVF reconstructor 104 configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor 103; an image reconstructor 105 configured for reconstructing intact-vehicle images from the reconstructed IVF images generated by the IVF reconstructor 104 and texture feature maps generated by the texture encoder 102; and an anomaly locator 106 configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; the IVF reconstructor 104 may include a frame encoder 1041 configured for mapping extracted vehicle frame images to generate IVF feature maps and a frame decoder 1042 configured for remapping IVF feature maps back to IVF images; the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; texture feature maps of the vehicle object image are obtained and represented as images of different sections of the vehicle, and details of the sections of the vehicle are obtained by edge segmentation). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 5,
The combination of Chen and Tang further discloses wherein the scans show different sections of the scanned object. (Tang, [0006], discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; texture feature maps of the vehicle object image are obtained and represented as an image). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 6,
The combination of Chen and Tang further discloses wherein the textured representation is determined taking account of a distortion of the scans which follows from a perspective of the motor vehicle relative to the scanned object. (Tang, [0006], [0015], Fig. 1, discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; further discloses functional units of an apparatus for image-based anomaly detection for vehicle damage assessment in accordance with one embodiment of the present invention. The apparatus may comprise an image processor 101 configured for performing various imaging processing techniques such as image segmentation and augmentation on input images of vehicles; a texture encoder 102 configured for generating texture feature maps from vehicle images processed by the image processor 101; a frame extractor 103 configured for extracting vehicle frame images from vehicle images processed by the image processor 101; an IVF reconstructor 104 configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor 103; an image reconstructor 105 configured for reconstructing intact-vehicle images from the reconstructed IVF images generated by the IVF reconstructor 104 and texture feature maps generated by the texture encoder 102; and an anomaly locator 106 configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions. The IVF reconstructor 104 may include a frame encoder 1041 configured for mapping extracted vehicle frame images to generate IVF feature maps and a frame decoder 1042 configured for remapping IVF feature maps back to IVF images; texture feature maps of the vehicle object image are obtained and represented as an image, and image parts where the damaged parts are skewed (distorted) are compared with a trained stored image dataset to determine whether they are distorted, segmented as anomalies, and inpainted accordingly). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 7,
The combination of Chen and Tang further discloses wherein the textured representation is determined taking account of a distortion of the scans which follows from a perspective of the motor vehicle relative to the scanned object. (Tang, [0006], [0015], Fig. 1, discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; further discloses functional units of an apparatus for image-based anomaly detection for vehicle damage assessment in accordance with one embodiment of the present invention. The apparatus may comprise an image processor 101 configured for performing various imaging processing techniques such as image segmentation and augmentation on input images of vehicles; a texture encoder 102 configured for generating texture feature maps from vehicle images processed by the image processor 101; a frame extractor 103 configured for extracting vehicle frame images from vehicle images processed by the image processor 101; an IVF reconstructor 104 configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor 103; an image reconstructor 105 configured for reconstructing intact-vehicle images from the reconstructed IVF images generated by the IVF reconstructor 104 and texture feature maps generated by the texture encoder 102; and an anomaly locator 106 configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions. The IVF reconstructor 104 may include a frame encoder 1041 configured for mapping extracted vehicle frame images to generate IVF feature maps and a frame decoder 1042 configured for remapping IVF feature maps back to IVF images; texture feature maps of the vehicle object image are obtained and represented as an image, and image parts where the damaged parts are skewed (distorted) are compared with a trained stored image dataset to determine whether they are distorted, segmented as anomalies, and inpainted accordingly). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 8,
The combination of Chen and Tang further discloses wherein a geographical position of the motor vehicle is determined; (Chen, Description, discloses that the vehicle can be a passenger vehicle and/or a utility vehicle, for example a land vehicle, an industrial vehicle, an industrial machine, a vehicle for a swap body, a mobile robot and/or an automated driverless transport system; the primary sensor device is, for example, a distance sensor, for example a radar sensor, a lidar sensor and/or an ultrasonic sensor. The first environment data generated by the primary sensor device typically contain position data of one or more objects detected by the primary sensor device in the environment of the vehicle. For example, in the event that the primary sensor device comprises a radar sensor, a radio frequency signal (hereinafter: RF signal) is sent from a transmitter of the radar sensor. The RF signal is scattered back or reflected on an object in the vicinity of the vehicle and propagates back to a receiver of the radar sensor. The period in which the RF signal is propagated between the time of transmission and reception is generally known in the technical field as the time of flight (ToF). The position of the object reflecting the RF signal can typically be derived from the flight time in the form of an elevation angle, a side angle and a distance. If the primary sensor device comprises a lidar sensor, a light signal can be sent from a transmitter of the lidar sensor. The light signal is scattered or reflected back on an object located in the vicinity of the vehicle and propagates back to a receiver of the lidar sensor. The period of time in which the light signal is propagated between the time of transmission and reception is also known as the time of flight. The position data of detected objects can also be determined from the flight time; the geographical position of the vehicle in the environment is determined) and wherein the scanned object is recognized based on map data in a region of the geographical position. (Tang, [0006], discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; texture feature maps of the vehicle object image are obtained and represented as an image). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 9,
The combination of Chen and Tang further discloses wherein a geographical position of the motor vehicle is determined; (Chen, Description, discloses that the vehicle can be a passenger vehicle and/or a utility vehicle, for example a land vehicle, an industrial vehicle, an industrial machine, a vehicle for a swap body, a mobile robot and/or an automated driverless transport system; the primary sensor device is, for example, a distance sensor, for example a radar sensor, a lidar sensor and/or an ultrasonic sensor. The first environment data generated by the primary sensor device typically contain position data of one or more objects detected by the primary sensor device in the environment of the vehicle. For example, in the event that the primary sensor device comprises a radar sensor, a radio frequency signal (hereinafter: RF signal) is sent from a transmitter of the radar sensor. The RF signal is scattered back or reflected on an object in the vicinity of the vehicle and propagates back to a receiver of the radar sensor. The period in which the RF signal is propagated between the time of transmission and reception is generally known in the technical field as the time of flight (ToF). The position of the object reflecting the RF signal can typically be derived from the flight time in the form of an elevation angle, a side angle and a distance. If the primary sensor device comprises a lidar sensor, a light signal can be sent from a transmitter of the lidar sensor. The light signal is scattered or reflected back on an object located in the vicinity of the vehicle and propagates back to a receiver of the lidar sensor. The period of time in which the light signal is propagated between the time of transmission and reception is also known as the time of flight. The position data of detected objects can also be determined from the flight time; the geographical position of the vehicle in the environment is determined) and wherein the scanned object is recognized based on map data in a region of the geographical position. (Tang, [0006], discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; texture feature maps of the vehicle object image are obtained and represented as an image). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
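For context, the time-of-flight position determination quoted from Chen in claims 8 and 9 reduces to the standard range and angle-to-Cartesian relations below. These are generic textbook formulas with the editor's symbols, not language from the reference:

```latex
d = \frac{c\, t_{\mathrm{ToF}}}{2}, \qquad
x = d \cos\theta_{\mathrm{el}} \cos\theta_{\mathrm{az}}, \qquad
y = d \cos\theta_{\mathrm{el}} \sin\theta_{\mathrm{az}}, \qquad
z = d \sin\theta_{\mathrm{el}}
```

where c is the propagation speed of the RF or light signal, t_ToF is the round-trip flight time, and θ_az and θ_el are the side (azimuth) and elevation angles of the reflecting object.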
Regarding Claim 10,
The combination of Chen and Tang further discloses wherein a texture on one section of the scanned object, with respect to which no scan is present, is determined based on a texture on another section of the scanned object. (Tang, [0006], [0016]-[0018], [0022], discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image. Tang further discloses the frame encoder 1041 may comprise a stack of convolutional layers configured to extract multi-scale context information represented in an IVF feature map, and the frame decoder 1042 may comprise a stack of deconvolutional layers to reconstruct an intact vehicle frame from the IVF feature map. Skip connections may be used to connect the convolutional layers of the frame encoder 1041 to the deconvolution layers of the frame decoder 1042 such that fine details and reconstruction locations of edges in the vehicle frame at pixel level can be better preserved. The texture encoder 102, frame extractor 103, IVF reconstructor 104, image reconstructor 105, and anomaly locator 106 are generative convolutional neural networks (generators) which are trained under a generative contextualized adversarial network (GCAN) model with a training dataset containing a plurality of images of undamaged vehicles, IVF images and augmented vehicle frame images of the vehicles. The trained generators are then used to perform the anomaly detection. For the training of the generators, the apparatus further includes a frame encoder discriminator 107 configured for generating a score of likelihood between two IVF feature maps and an image discriminator 108 configured for generating a score of likelihood between an input vehicle image and a corresponding reconstructed vehicle image. In one embodiment, the frame encoder discriminator 107 and image discriminator 108 may be discriminative convolutional neural networks, and both can also be trained via the GCAN model. At [0022], S203: a damage augmentation is performed on the IVF image D201 to generate a damage-augmented vehicle frame image D203 by the image processor 101, wherein in one embodiment the damage augmentation comprises modifying one or more randomly selected regions in the vehicle frame image D201, and wherein the modification includes one or more of random translation, random rotation, random scaling, random line addition, noise addition, partial occlusion, and region substitution by replacing the selected region with a dissimilar region of another vehicle frame image. In other words, the various sections of the vehicle object image are scanned and analyzed using machine learning, the edges of texture sections are determined to detect anomalies in sections of the image that differ, the pixels of a section for which no scan is available are textured the same as the non-anomaly regions, and masked similar-color pixels and regions of the image are substituted by dissimilar regions from another image). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
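As a sketch of the encoder-decoder-with-skip-connections arrangement that Tang describes, the following is an illustrative PyTorch example by the editor; the layer counts, channel sizes, and class name are assumptions, not Tang's disclosed architecture:

```python
# Illustrative sketch only; not Tang's disclosed network.
import torch
import torch.nn as nn

class TinyFrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: stacked strided convolutions extract multi-scale context.
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: deconvolutional (transposed-conv) layers reconstruct the frame.
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        # The skip connection doubles the channel count seen by the final stage.
        self.dec2 = nn.ConvTranspose2d(16 + 16, 1, 4, stride=2, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)             # (B, 16, H/2, W/2)
        e2 = self.enc2(e1)            # (B, 32, H/4, W/4)
        d1 = self.dec1(e2)            # (B, 16, H/2, W/2)
        d1 = torch.cat([d1, e1], 1)   # skip connection helps preserve fine edge detail
        return torch.sigmoid(self.dec2(d1))  # reconstructed frame image

# Sanity check on a dummy 64x64 single-channel "vehicle frame" image:
# out = TinyFrameAutoencoder()(torch.zeros(1, 1, 64, 64))  # -> shape (1, 1, 64, 64)
```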
Regarding Claim 11,
The combination of Chen and Tang further discloses wherein a texture on one section of the scanned object, with respect to which no scan is present, is determined based on a texture on another section of the scanned object. (Tang, [0006], [0016]-[0018], [0022], discloses the frame encoder 1041 may comprise a stack of convolutional layers configured to extract multi-scale context information represented in an IVF feature map. The frame decoder 1042 may comprise a stack of deconvolutional layers to reconstruct an intact vehicle frame from the IVF feature map. Skip connections may be used to connect the convolutional layers of the frame encoder 1041 to the deconvolution layers of the frame decoder 1042 such that fine details and reconstruction locations of edges in the vehicle frame at pixel level can be better preserved; the texture encoder 102, frame extractor 103, IVF reconstructor 104, image reconstructor 105, and anomaly locator 106 are generative convolutional neural networks (generators) which are trained under a generative contextualized adversarial network (GCAN) model with a training dataset containing a plurality of images of undamaged vehicles, IVF images and augmented vehicle frame images of the vehicles. The trained generators are then used to perform the anomaly detection. For the training of the generators, the apparatus further includes a frame encoder discriminator 107 configured for generating a score of likelihood between two IVF feature maps and an image discriminator 108 configured for generating a score of likelihood between an input vehicle image and a corresponding reconstructed vehicle image. In one embodiment, the frame encoder discriminator 107 and image discriminator 108 may be discriminative convolutional neural networks; the frame encoder discriminator 107 and image discriminator 108 can also be trained via the GCAN model. At [0022], S203: perform a damage augmentation to the IVF image D201 to generate a damage-augmented vehicle frame image D203 by the image processor 101, wherein in one embodiment the damage augmentation comprises modifying one or more randomly selected regions in the vehicle frame image D201, and wherein the modification includes one or more of random translation, random rotation, random scaling, random line addition, noise addition, partial occlusion, and region substitution by replacing the selected region with a dissimilar region of another vehicle frame image; the various sections of the vehicle object image are scanned and analyzed using machine learning, the edges of texture sections are determined to detect anomalies in sections of the image that differ, the pixels of a section for which no scan is available are textured the same as the non-anomaly regions, and masked similar-color pixels and regions of the image are substituted with dissimilar regions from another image). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 12,
The combination of Chen and Tang further discloses wherein the texture on the section is determined by machine learning. (Tang, [0004], discloses a problem with the aforesaid neural network-based approaches is that they require a large number of annotated images of damaged vehicles or vehicle components as training datasets for machine learning. As images of vehicle damages are relatively scarce and damages are too varied (e.g., there are no typical damages), the training datasets based on images of damaged vehicles are always insufficient to achieve a detection capability to detect unknown/unseen vehicle damages of stochastic types and extents, such as shape deformation of vehicle frame, from images which are taken in various contexts. Thus, there is a need in the field for a better approach; the texture of the image section of the object is determined by a machine learning algorithm). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 13,
The combination of Chen and Tang further discloses wherein the texture on the section is determined by inpainting. (Tang, [0004], [0022], discloses a problem with the aforesaid neural network-based approaches is that they require a large number of annotated images of damaged vehicles or vehicle components as training datasets for machine learning. As images of vehicle damages are relatively scarce and damages are too varied (e.g., there are no typical damages), the training datasets based on images of damaged vehicles are always insufficient to achieve a detection capability to detect unknown/unseen vehicle damages of stochastic types and extents, such as shape deformation of vehicle frame, from images which are taken in various contexts. Thus, there is a need in the field for a better approach; the texture of the image section of the object is determined by a machine learning algorithm). (Tang, [0006], [0015], Fig. 1, discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; further discloses functional units of an apparatus for image-based anomaly detection for vehicle damage assessment in accordance with one embodiment of the present invention. The apparatus may comprise an image processor 101 configured for performing various imaging processing techniques such as image segmentation and augmentation on input images of vehicles; a texture encoder 102 configured for generating texture feature maps from vehicle images processed by the image processor 101; a frame extractor 103 configured for extracting vehicle frame images from vehicle images processed by the image processor 101; an IVF reconstructor 104 configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor 103; an image reconstructor 105 configured for reconstructing intact-vehicle images from the reconstructed IVF images generated by the IVF reconstructor 104 and texture feature maps generated by the texture encoder 102; and an anomaly locator 106 configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions. The IVF reconstructor 104 may include a frame encoder 1041 configured for mapping extracted vehicle frame images to generate IVF feature maps and a frame decoder 1042 configured for remapping IVF feature maps back to IVF images; texture feature maps of the vehicle object image are obtained and represented as an image, and image parts where the damaged parts are skewed (distorted) are compared with a trained stored image dataset to determine whether they are distorted, segmented as anomalies, and inpainted accordingly). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 14,
The combination of Chen and Tang further discloses providing map data which comprise the scanned object. (Tang, [0006], [0015], Fig. 1, discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; further discloses functional units of an apparatus for image-based anomaly detection for vehicle damage assessment in accordance with one embodiment of the present invention. The apparatus may comprise an image processor 101 configured for performing various imaging processing techniques such as image segmentation and augmentation on input images of vehicles; a texture encoder 102 configured for generating texture feature maps from vehicle images processed by the image processor 101; a frame extractor 103 configured for extracting vehicle frame images from vehicle images processed by the image processor 101; an IVF reconstructor 104 configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor 103; an image reconstructor 105 configured for reconstructing intact-vehicle images from the reconstructed IVF images generated by the IVF reconstructor 104 and texture feature maps generated by the texture encoder 102; and an anomaly locator 106 configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions. The IVF reconstructor 104 may include a frame encoder 1041 configured for mapping extracted vehicle frame images to generate IVF feature maps and a frame decoder 1042 configured for remapping IVF feature maps back to IVF images; texture feature maps of the vehicle object image are obtained and represented as an image, and image parts where the damaged parts are skewed (distorted) are compared with a trained stored image dataset to determine whether they are distorted, segmented as anomalies, and inpainted accordingly). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 15,
The combination of Chen and Tang further discloses providing map data which comprise the scanned object. (Tang, [0006], discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; texture feature maps of the vehicle object image are obtained and represented as an image). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 16,
The combination of Chen and Tang further discloses wherein the map data are communicated to a plurality of motor vehicles. (Tang, [0006], discloses the functional units comprise at least an image processor configured for segmenting one or more input vehicle images; a texture encoder configured for generating texture feature maps from the segmented vehicle images; a frame extractor configured for extracting vehicle frame images from the segmented vehicle images; an intact-vehicle frame (IVF) reconstructor configured for reconstructing IVF images from vehicle frame images extracted by the frame extractor; an image reconstructor configured for reconstructing intact-vehicle images from reconstructed IVF images generated by the IVF reconstructor and texture feature maps generated by the texture encoder; an anomaly locator configured for comparing processed vehicle images and their corresponding reconstructed intact-vehicle images to detect anomaly regions; a frame encoder discriminator configured for generating a score of likelihood between two IVF feature maps; and an image discriminator configured for generating a score of likelihood between the input vehicle image and the reconstructed vehicle image; texture feature maps of the vehicle object image are obtained and represented as an image). Additionally, the rationale and motivation to combine the references Chen and Tang as applied in the rejection of claim 1 apply to this claim.
Claims 17-19 recite a system with elements corresponding to the method steps recited in Claims 1-2 and 13, respectively. Therefore, the recited elements of system claims 17-19 are mapped to the proposed combination in the same manner as the corresponding steps of Claims 1-2 and 13, respectively. Additionally, the rationale and motivation to combine the Chen and Tang references presented in the rejection of Claim 1 apply to these claims.
Furthermore, the combination of Chen and Tang further discloses A system comprising: a scanning device on board a motor vehicle, wherein the scanning device is configured for scanning an environment of the motor vehicle; and a central station configured to receive a scan. (Tang, [0022], discloses S203: perform a damage augmentation to the IVF image D201 to generate a damage-augmented vehicle frame image D203 by the image processor 101; wherein in one embodiment, the damage augmentation comprises modifying one or more randomly selected regions in the vehicle frame image D201, and wherein the modification includes one or more of random translation, random rotation, random scaling, random line addition, noise addition, partial occlusion, and region substitution by replacing the selected region with a dissimilar region of another vehicle frame image).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US-20160325680-A1 ([0109] Digitally removing the static object functions to remove the visual obstruction from the video frame. In a first variation, digitally removing the static object includes: segmenting the video frame into a foreground and background, and retaining the background. In a second variation, digitally removing the static object includes: treating the region of the video frame occupied by the static object as a lost or corrupted part of the frame, and using image interpolation or video interpolation to reconstruct the obstructed portion of the background (e.g., using structural inpainting, textural inpainting, etc.). In a third variation, digitally removing the static object includes: identifying the pixels displaying the static object and removing the pixels from the video frame)
US-20140300566-A1 ([0089] Referring to FIG. 4A, operations S401 to S407 are the same as operations S201 to S207 of FIG. 2, and thus will not be described. The 3D image conversion apparatus 100 processes a region occluded by a front object by performing inpainting with respect to objects after obtaining depth information in operation S409. Herein, inpainting may correspond to an operation of reconstructing a part of an image if the part of the image is lost or distorted. Inpainting may correspond to an operation of reconstructing a region occluded by an object if a viewpoint is changed for a 3D image. Inpainting may be performed by copying a texture of a part around a part to be reconstructed and pasting the copied texture to the part to be reconstructed. For example, the 3D image conversion apparatus 100 may perform inpainting by copying a texture of a part around a part occluded by a particular object and pasting the copied texture to the occluded part. The 3D image conversion apparatus 100 provides the inpainting-processed 3D image upon receiving a viewpoint change input, and thus the user may see the 3D image from various viewpoints. The 3D image conversion apparatus 100 generates and displays the 3D image based on depth information and inpainting results in operation S411)
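The reconstruction described in the cited passage, copying texture from around a lost or occluded region and pasting it into the region to be reconstructed, corresponds to the classical image-inpainting operation. The following is a minimal editor's sketch using OpenCV's built-in routine, with hypothetical file names and mask coordinates; it is not code from either cited publication:

```python
# Illustrative sketch only; not code from the record.
import cv2
import numpy as np

img = cv2.imread("frame.png")                 # hypothetical input video frame
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[100:160, 200:300] = 255                  # hypothetical lost/occluded region to reconstruct

# Telea's method fills the masked region from the surrounding texture,
# analogous to the copy-and-paste of neighboring texture described above.
restored = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("restored.png", restored)
```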
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PINALBEN V PATEL whose telephone number is (571)270-5872. The examiner can normally be reached M-F: 10am - 8pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at 571-272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Pinalben Patel/Examiner, Art Unit 2673