DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
In the preliminary remarks and amendments received on February 13, 2024, claims 1-20 are canceled and claims 21-40 are added. Accordingly, claims 21-40 are currently pending for examination in Application No. 18/505,523, filed November 9, 2023.
Applicant’s preliminary amendments to the Specification, filed November 9, 2023, have been entered.
Priority
Acknowledgment is made of the instant application’s status as a continuation (CON) of Patent Application No. 17/079,838, filed October 26, 2020, now U.S. Patent No. 11,816,823, which is a CON of Patent Application No. 16/695,596, filed November 26, 2019, now U.S. Patent No. 10,818,002.
Information Disclosure Statement
The following items of information in the information disclosure statement (IDS) filed November 9, 2023, fail to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document, each cited non-patent literature publication or that portion which caused it to be listed, and all other information or that portion which caused the item to be listed: foreign patent document no. 1 (CN 103777852) and non-patent literature document no. 1 (Gao et al., “Vehicle make recognition based on convolutional neural network,” 2015). Accordingly, these items of information in the attached IDS are not being considered by the examiner.
Double Patenting
A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101 which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (Emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957).
A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the claims that are directed to the same invention so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.
Claims 21-40 are rejected under 35 U.S.C. 101 as claiming the same invention as that of claims 1-20 of prior U.S. Patent Nos. 10,818,002 and 11,816,823. This is a statutory double patenting rejection. Although the claims at issue are not identical in wording, they are not patentably distinct from each other because the scope of claims 1-20 of the reference patents obviously teaches claims 21-40 of the instant application, as shown in the claim comparison tables below.
Claim
Instant Application (18/505,523)
Claim
U.S. Patent 10,818,002 (App. No. 16/695,596)
21
A computer-implemented method for processing images, the method comprising:
1
A computer-implemented method for processing images, the method comprising:
21
obtaining, by one or more processors, at least one image for analyzing, wherein the at least one image depicts at least one object;
1
obtaining, by one or more processors, at least one image for analyzing, wherein the at least one image depicts at least one object;
21
inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins;
1
…inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins;
21
analyzing, by the one or more processors, the at least one image via the at least one of the plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the at least one object depicted in the at least one image;
1
analyzing, by the one or more processors, the at least one image concurrently via each of at least two of the plurality of image plugins, wherein the at least two of the plurality of image plugins are configured to identify one or more different aspects of the same at least one object depicted in the at least one image;
Where the “at least two of the plurality of image plugins” includes “at least one of the plurality of image plugins”.
21
determining, by the one or more processors, metadata related to the at least one image based on the at least one of the plurality of image plugins;
1
determining, by the one or more processors, metadata related to the at least one image based on the at least two of the plurality of image plugins;
Where the “at least two of the plurality of image plugins” includes “at least one of the plurality of image plugins”.
21
sorting, by the one or more processors, the at least one image to generate at least one sorted image; and
1
…sorting, by the one or more processors, the at least one filtered image to generate at least one sorted image;
21
displaying, by the one or more processors, the at least one sorted image.
1
…displaying, by the one or more processors, the at least one sorted image… .
22
The computer-implemented method of claim 21 further including:
replacing, by the one or more processors, based on a user selection, the at least one sorted image with an enlarged image of one of two or more thumbnail images.
10
The method of claim 9, wherein the at least one object is a vehicle, and the one or more different aspects of the same at least one object correspond to a side view, front view, or rear view of the vehicle, the method further comprising:
replacing, by the one or more processors, based on a user selection, the at least one object in the first display area with an enlarged image of one of the two or more thumbnail images.
Where the “at least one object in the first display area” is the “at least one sorted image” as disclosed in claim 9:
9. The method of claim 1, wherein the displaying, by the one or more processors, the at least one sorted image according to the user interaction with the navigation controls on the webpage further comprises:
displaying the at least one object in a first display area;… .
23
The computer-implemented method of claim 21, wherein displaying the at least one sorted image is based on one or more predetermined criteria.
1
…displaying, by the one or more processors, the at least one sorted image according to a user interaction with the navigation controls on the webpage.
Where the “user interaction” is “one or more predetermined criteria” predetermined by the user.
24
The computer-implemented method of claim 23, wherein the predetermined criteria includes one or more different aspects of the at least one object, the one or more different aspects comprising a side view, front view, or rear view of the sorted object.
9
The method of claim 1, wherein the displaying, by the one or more processors, the at least one sorted image according to the user interaction with the navigation controls on the webpage further comprises:
displaying the at least one object in a first display area; and
displaying two or more thumbnail images in a second image display area, wherein each of the two or more thumbnail images depicts the one or more different aspects of the same at least one object.
10
The method of claim 9, wherein the at least one object is a vehicle, and the one or more different aspects of the same at least one object correspond to a side view, front view, or rear view of the vehicle, the method further comprising:
replacing, by the one or more processors, based on a user selection, the at least one object in the first display area with an enlarged image of one of the two or more thumbnail images.
Where “displaying… the at least one image according to the user interaction,” which includes displaying images depicting “one or more different aspects of the same at least one object,” corresponds to the “predetermined criteria” (i.e., the “user selection”) including the “one or more different aspects”.
25
The computer-implemented method of claim 24, further including:
analyzing the at least one image via at least two of the plurality of image plugins; and
after the analyzing the at least one image via each of the at least two image plugins, removing the at least one image from a queue of images.
4
The computer-implemented method of claim 3, the method further including:
analyzing the at least one image via the at least two of the plurality of image plugins; and
after the analyzing the at least one image via each of the at least two image plugins, removing the at least one image from the queue of images.
26
The computer-implemented method of claim 21, further including:
analyzing at least two or more images stored in a queue of images via at least two of the plurality of image plugins.
3
The computer-implemented method of claim 1, wherein at least two or more images are stored in the queue of images and wherein the plurality of image plugins are arranged in a centralized location, the method further including:
analyzing, in parallel, the at least two or more images stored in the queue of images via the at least two of the plurality of image plugins.
27
The computer-implemented method of claim 21, further including:
analyzing the at least one image in parallel via the plurality of image plugins.
2
The computer-implemented method of claim 1, wherein the plurality of image plugins are arranged in distributed locations and the method further includes:
analyzing the at least one image in parallel via the plurality of image plugins.
28
The computer-implemented method of claim 21, further including:
filtering, by the one or more processors, the at least one image based on one or more rule sets to generate at least one filtered image.
1
A computer-implemented method for processing images, the method comprising:
…filtering, by the one or more processors, the at least one image based on one or more rule sets to generate at least one filtered image;…
29
The computer-implemented method of claim 28, wherein the one or more rule sets includes the metadata related to the at least one image.
7
The computer-implemented method of claim 1, where the one or more rule sets includes the metadata related to the at least one image.
30
The computer-implemented method of claim 23, wherein analyzing the at least one image via the at least one of the plurality of image plugins utilizes at least one object detection model or image recognition model.
8
The computer-implemented method of claim 1, wherein analyzing the at least one image via the at least two of the plurality of image plugins utilizes at least one object detection model or image recognition model.
31
A computer system for processing images, the computer system comprising:
11
A computer system for processing images, the computer system comprising:
31
a memory having processor-readable instructions stored therein; and
at least one processor configured to access the memory and execute the processor-readable instructions, which when executed by the at least one processor configures the at least one processor to perform a plurality of functions, including functions for:
11
a memory having processor-readable instructions stored therein; and
at least one processor configured to access the memory and execute the processor-readable instructions, which when executed by the at least one processor configures the at least one processor to perform a plurality of functions, including functions for:
31
obtaining at least one image for analyzing, wherein the at least one image depicts at least one object;
11
obtaining at least one image, wherein the at least one image depicts at least one object; …
31
inputting the at least one image to at least one of a plurality of image plugins;
analyzing the at least one image via the at least one of the plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the at least one object depicted in the at least one image;
11
analyzing the at least one image concurrently via each of at least two of a plurality of image plugins, wherein the at least two of the plurality of image plugins are configured to identify one or more different aspects of the same object depicted in the at least one image;
Where the “at least two of the plurality of image plugins” includes “at least one of the plurality of image plugins”.
Where “analyzing the at least one image concurrently via each of at least two of a plurality of image plugins” in claim 11 includes “inputting the at least one image to at least one of a plurality of image plugins” as disclosed in claim 1:
1. …inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins;
analyzing, by the one or more processors, the at least one image concurrently via each of at least two of the plurality of image plugins, wherein the at least two of the plurality of image plugins are configured to identify one or more different aspects of the same at least one object depicted in the at least one image; …
31
determining metadata related to the at least one image based on the at least one of the plurality of image plugins;
11
determining metadata related to the at least one image based on the analyzing the at least one image; …
31
sorting the at least one image to generate at least one sorted image; and
11
sorting the at least one filtered image to generate at least one sorted image;
31
displaying the at least one sorted image, wherein the displaying is based on a predetermined order.
11
displaying the at least one sorted image based on an organizational sequence of a webpage;
Where the “organizational sequence” is a “predetermined order”.
32
The computer system of claim 31, wherein the functions further include:
replacing based on a user selection, the at least one sorted image with an enlarged image of one of two or more thumbnail images.
19
The system of claim 18, wherein the at least one object is a vehicle, and the one or more different aspects of the same at least one object correspond to a side view, front view, or rear view of the vehicle, and wherein the functions further include:
replacing, based on a user selection, the at least one object in the first display area with an enlarged image of one of the two or more thumbnail images.
Where the “at least one object in the first display area” is the “at least one sorted image” as disclosed in claim 18:
18. The system of claim 11, wherein the displaying, by the one or more processors, the at least one sorted image according to the user interaction with the navigation controls on the webpage further comprises:
displaying an object in a first display area; … .
33
The computer system of claim 31, wherein displaying the at least one sorted image is based on one or more predetermined criteria.
11
A computer system for processing images, the computer system comprising:
…filtering the at least one image based on a predetermined metadata to generate at least one filtered image;
sorting the at least one filtered image to generate at least one sorted image;
displaying the at least one sorted image… .
Where the “at least one sorted image” being displayed is generated by “filtering the at least one image based on a predetermined metadata,” such that “displaying the at least one sorted image is based on one or more predetermined criteria”.
34
The computer system of claim 33, wherein the predetermined criteria includes one or more different aspects of the at least one object, the one or more different aspects comprising a side view, front view, or rear view of the sorted object.
19
The system of claim 18, wherein the at least one object is a vehicle, and the one or more different aspects of the same at least one object correspond to a side view, front view, or rear view of the vehicle, and wherein the functions further include:…
Where the “one or more different aspects” are the “predetermined criteria,” as the “one or more different aspects” are identified before the displaying of the at least one sorted image relies upon them, as disclosed in claim 11:
11. …analyzing the at least one image concurrently via each of at least two of a plurality of image plugins, wherein the at least two of the plurality of image plugins are configured to identify one or more different aspects of the same object depicted in the at least one image; … .
35
The computer system of claim 34, wherein the functions further include analyzing the at least one image via at least two of the plurality of image plugins; and
after the analyzing the at least one image via each of the at least two image plugins, removing the at least one image from a queue of images.
14
The computer system of claim 13, wherein the functions further include analyzing the at least one image via each of the at least two image plugins, and
after the analyzing the at least one image via each of the at least two image plugins, removing the at least one image from the queue of images.
36
The computer system of claim 31, wherein the functions further include:
analyzing at least two or more images stored in a queue of images via at least two of the plurality of image plugins.
11
…storing the at least one image in a queue of images;
analyzing the at least one image concurrently via each of at least two of a plurality of image plugins…; …
Where analyzing the “at least one image in a queue of images” via “each of at least two of a plurality of image plugins” is “analyzing at least two or more images stored in a queue of images via at least two of the plurality of image plugins”.
37
The computer system of claim 31, wherein the functions further include:
analyzing the at least one image in parallel via the plurality of image plugins.
11
…analyzing the at least one image concurrently via each of at least two of a plurality of image plugins, wherein the at least two of the plurality of image plugins are configured to identify one or more different aspects of the same object depicted in the at least one image; …
Where “concurrently” is “in parallel”.
38
The computer system of claim 31, wherein the functions further include:
filtering the at least one image based on one or more rule sets to generate at least one filtered image.
11
…filtering the at least one image based on a predetermined metadata to generate at least one filtered image; …
Where “predetermined metadata” is “one or more rule sets”.
39
The computer system of claim 38, wherein the one or more rule sets includes the metadata related to the at least one image.
11
…filtering the at least one image based on a predetermined metadata to generate at least one filtered image; …
Where “predetermined metadata” is “one or more rule sets”.
40
A computer-implemented method for processing images, the method comprising:
11
A computer system for processing images, the computer system comprising:
…
at least one processor configured to access the memory and execute the processor-readable instructions, which when executed by the at least one processor configures the at least one processor to perform a plurality of functions, including functions for:
40
obtaining, by one or more processors, at least one image for analyzing, wherein the at least one image depicts at least one object;
11
obtaining at least one image, wherein the at least one image depicts at least one object;
40
inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins;
11
…analyzing the at least one image concurrently via each of at least two of a plurality of image plugins, wherein the at least two of the plurality of image plugins are configured to identify one or more different aspects of the same object depicted in the at least one image; …
Where “analyzing the at least one image concurrently via each of at least two of a plurality of image plugins” in claim 11 includes “inputting the at least one image to at least one of a plurality of image plugins” as disclosed in claim 1:
1. …inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins;
analyzing, by the one or more processors, the at least one image concurrently via each of at least two of the plurality of image plugins, wherein the at least two of the plurality of image plugins are configured to identify one or more different aspects of the same at least one object depicted in the at least one image; …
40
analyzing, by the one or more processors, the at least one image via the at least one of the plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the at least one object depicted in the at least one image, and
11
…analyzing the at least one image concurrently via each of at least two of a plurality of image plugins, wherein the at least two of the plurality of image plugins are configured to identify one or more different aspects of the same object depicted in the at least one image;
Where the “at least two of a plurality of image plugins” includes “at least one of the plurality of image plugins”.
40
wherein the analyzing utilizes at least one object detection model or image recognition model;
17
The computer system of claim 11, wherein analyzing the at least one image via the at least two of the plurality of image plugins utilizes at least one object detection model or image recognition model.
40
determining, by the one or more processors, metadata related to the at least one image based on the at least one of the plurality of image plugins;
11
… analyzing the at least one image concurrently via each of at least two of a plurality of image plugins…;
…determining metadata related to the at least one image based on the analyzing the at least one image; …
Where the “at least two of a plurality of image plugins” includes “at least one of the plurality of image plugins”.
40
sorting, by the one or more processors, the at least one image to generate at least one sorted image; and
11
…sorting the at least one filtered image to generate at least one sorted image; …
40
displaying, by the one or more processors, the at least one sorted image, wherein the displaying is based on one or more predetermined criteria,
11
…displaying the at least one sorted image according to a user interaction with the navigation controls on the webpage; …
Where the “user interaction” is “one or more predetermined criteria” predetermined by the user.
40
wherein the predetermined criteria includes one or more different aspects of the at least one object, the one or more different aspects comprising a side view, front view, or rear view of the sorted object.
19
The system of claim 18, wherein the at least one object is a vehicle, and the one or more different aspects of the same at least one object correspond to a side view, front view, or rear view of the vehicle, and wherein the functions further include:
replacing, based on a user selection, the at least one object in the first display area with an enlarged image of one of the two or more thumbnail images.
Where the user interaction of “user selection,” which is further based on “one or more different aspects of the same at least one object,” corresponds to the “predetermined criteria” (i.e., the “user selection” is a selection predetermined by the user) including the “one or more different aspects”.
Claim
Instant Application (18/505,523)
Claim
U.S. Patent 11,816,823 (App. No. 17/079,838)
21
A computer-implemented method for processing images, the method comprising:
1
A computer-implemented method for processing images, the method comprising:
21
obtaining, by one or more processors, at least one image for analyzing, wherein the at least one image depicts at least one object;
1
obtaining, by one or more processors, at least one image for analyzing, wherein the at least one image depicts at least one object;
21
inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins;
1
…inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins;
21
analyzing, by the one or more processors, the at least one image via the at least one of the plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the at least one object depicted in the at least one image;
1
analyzing, by the one or more processors, the at least one image via the at least one of the plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the same at least one object depicted in the at least one image;
21
determining, by the one or more processors, metadata related to the at least one image based on the at least one of the plurality of image plugins;
1
determining, by the one or more processors, metadata related to the at least one image based on the at least one of the plurality of image plugins;
21
sorting, by the one or more processors, the at least one image to generate at least one sorted image; and
1
…sorting, by the one or more processors, the at least one filtered image to generate at least one sorted image; and
21
displaying, by the one or more processors, the at least one sorted image.
1
causing display of, by the one or more processors, the at least one sorted image… .
22
The computer-implemented method of claim 21 further including:
replacing, by the one or more processors, based on a user selection, the at least one sorted image with an enlarged image of one of two or more thumbnail images.
2
The computer-implemented method of claim 1, further including:
replacing, by the one or more processors, based on a user selection, the at least one object in the first display area with an enlarged image of one of the two or more thumbnail images.
Where the “at least one object in the first display area” is the “at least one sorted image” as disclosed in claim 1:
1. …causing display of, by the one or more processors, the at least one sorted image based on an organizational sequence of a webpage, wherein the causing display comprises:
causing display of the at least one object in a first display area;… .
23
The computer-implemented method of claim 21, wherein displaying the at least one sorted image is based on one or more predetermined criteria.
1
…causing display of, by the one or more processors, the at least one sorted image according to a user interaction with the navigation controls on the webpage.
Where the “user interaction” is “one or more predetermined criteria” predetermined by the user.
24
The computer-implemented method of claim 23, wherein the predetermined criteria includes one or more different aspects of the at least one object, the one or more different aspects comprising a side view, front view, or rear view of the sorted object.
8
The computer-implemented method of claim 1, wherein the at least one object is a vehicle, and the one or more different aspects of the same at least one object correspond to a side view, front view, or rear view of the vehicle, and wherein the method further includes:
replacing, based on a user selection, the at least one object in the first display area with an enlarged image of one of the two or more thumbnail images.
Where the “user selection,” which is further based on “one or more different aspects of the same at least one object,” corresponds to the “predetermined criteria” (i.e., the “user selection”) including the “one or more different aspects”.
25
The computer-implemented method of claim 24, further including:
analyzing the at least one image via at least two of the plurality of image plugins; and
after the analyzing the at least one image via each of the at least two image plugins, removing the at least one image from a queue of images.
5
The computer-implemented method of claim 4, further including:
analyzing the at least one image via the at least two of the plurality of image plugins; and
after the analyzing the at least one image via each of the at least two image plugins, removing the at least one image from the queue of images.
26
The computer-implemented method of claim 21, further including:
analyzing at least two or more images stored in a queue of images via at least two of the plurality of image plugins.
4
The computer-implemented method of claim 3, wherein at least two or more images are stored in the queue of images and wherein the plurality of image plugins are arranged in a centralized location, the method further including:
analyzing, in parallel, the at least two or more images stored in the queue of images via the at least two of the plurality of image plugins.
27
The computer-implemented method of claim 21, further including:
analyzing the at least one image in parallel via the plurality of image plugins.
7
The computer-implemented method of claim 1, further including:
analyzing the at least one image in parallel via the plurality of image plugins.
28
The computer-implemented method of claim 21, further including:
filtering, by the one or more processors, the at least one image based on one or more rule sets to generate at least one filtered image.
1
A computer-implemented method for processing images, the method comprising:
…filtering, by the one or more processors, the at least one image based on one or more rule sets to generate at least one filtered image;…
29
The computer-implemented method of claim 28, wherein the one or more rule sets includes the metadata related to the at least one image.
9
The computer-implemented method of claim 1, wherein the one or more rule sets includes the metadata related to the at least one image.
30
The computer-implemented method of claim 23, wherein analyzing the at least one image via the at least one of the plurality of image plugins utilizes at least one object detection model or image recognition model.
10
The computer-implemented method of claim 3, wherein analyzing the at least one image via the at least two of the plurality of image plugins utilizes at least one object detection model or image recognition model.
31
A computer system for processing images, the computer system comprising:
11
A computer system for processing images, the computer system comprising:
31
a memory having processor-readable instructions stored therein; and
at least one processor configured to access the memory and execute the processor-readable instructions, which when executed by the at least one processor configures the at least one processor to perform a plurality of functions, including functions for:
11
a memory having processor-readable instructions stored therein; and
at least one processor configured to access the memory and execute the processor-readable instructions, which when executed by the at least one processor configures the at least one processor to perform a plurality of functions, including functions for:
31
obtaining at least one image for analyzing, wherein the at least one image depicts at least one object;
11
obtaining at least one image, wherein the at least one image depicts at least one object;
…
31
inputting the at least one image to at least one of a plurality of image plugins;
analyzing the at least one image via the at least one of the plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the at least one object depicted in the at least one image;
11
analyzing the at least one image via at least one of a plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the same at least one object depicted in the at least one image;
Where “analyzing the at least one image via at least one of a plurality of image plugins” in claim 11 includes “inputting the at least one image to at least one of a plurality of image plugins” as disclosed in claim 1:
1. …inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins;
analyzing, by the one or more processors, the at least one image via the at least one of the plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the same at least one object depicted in the at least one image; …
31
determining metadata related to the at least one image based on the at least one of the plurality of image plugins;
11
determining metadata related to the at least one image based on the analyzing the at least one image;
31
sorting the at least one image to generate at least one sorted image; and
11
…sorting the at least one filtered image to generate at least one sorted image;
31
displaying the at least one sorted image, wherein the displaying is based on a predetermined order.
11
causing display of the at least one sorted image based on an organizational sequence of a webpage, wherein the causing display comprises: … .
Where the “organizational sequence” is a “predetermined order”.
32
The computer system of claim 31, wherein the functions further include:
replacing based on a user selection, the at least one sorted image with an enlarged image of one of two or more thumbnail images.
12
The computer system of claim 11, wherein the functions further include:
replacing based on a user selection, the at least one object in the first display area with an enlarged image of one of the two or more thumbnail images.
Where the “at least one object in the first display area” is the “at least one sorted image” as disclosed in claim 11:
11. …causing display of the at least one sorted image based on an organizational sequence of a webpage, wherein the causing display comprises:
causing display of the at least one object in a first display area;… .
33
The computer system of claim 31, wherein displaying the at least one sorted image is based on one or more predetermined criteria.
11
A computer system for processing images, the computer system comprising:
…filtering the at least one image based on a predetermined metadata to generate at least one filtered image;
sorting the at least one filtered image to generate at least one sorted image; and
causing display of the at least one sorted image… .
Where the “at least one sorted image” being displayed is generated by “filtering the at least one image based on a predetermined metadata,” such that “displaying the at least one sorted image is based on one or more predetermined criteria”.
34
The computer system of claim 33, wherein the predetermined criteria includes one or more different aspects of the at least one object, the one or more different aspects comprising a side view, front view, or rear view of the sorted object.
14
The computer system of claim 11, wherein the at least one object is a vehicle, and the one or more different aspects of the same at least one object correspond to a side view, front view, or rear view of the vehicle, and wherein the functions further include:…
Where the “one or more different aspects” are the “predetermined criteria,” as the “one or more different aspects” are identified before the displaying of the at least one sorted image relies upon them, as disclosed in claim 11:
11. …analyzing the at least one image via at least one of a plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the same at least one object depicted in the at least one image; … .
35
The computer system of claim 34, wherein the functions further include analyzing the at least one image via at least two of the plurality of image plugins; and
after the analyzing the at least one image via each of the at least two image plugins, removing the at least one image from a queue of images.
15
The computer system of claim 13, wherein the functions further include analyzing the at least one image via each of the at least two image plugins, and
after the analyzing the at least one image via each of the at least two image plugins, removing the at least one image from the queue of images.
36
The computer system of claim 31, wherein the functions further include:
analyzing at least two or more images stored in a queue of images via at least two of the plurality of image plugins.
11
…storing the at least one image in a queue of images;
analyzing the at least one image via at least one of a plurality of image plugins…; …
Where analyzing the “at least one image in a queue of images” includes “analyzing at least two or more images stored in a queue of images via at least two of the plurality of image plugins” as disclosed in claim 15:
15. The computer system of claim 13, wherein the functions further include analyzing the at least one image via each of the at least two image plugins, and after the analyzing the at least one image via each of the at least two image plugins, removing the at least one image from the queue of images.
37
The computer system of claim 31, wherein the functions further include:
analyzing the at least one image in parallel via the plurality of image plugins.
18
The computer system of claim 11, wherein the functions further include
analyzing the at least one image in parallel via the plurality of image plugins.
38
The computer system of claim 31, wherein the functions further include:
filtering the at least one image based on one or more rule sets to generate at least one filtered image.
11
…filtering the at least one image based on a predetermined metadata to generate at least one filtered image; …
Where “predetermined metadata” is “one or more rule sets”.
39
The computer system of claim 38, wherein the one or more rule sets includes the metadata related to the at least one image.
11
…filtering the at least one image based on a predetermined metadata to generate at least one filtered image; …
Where “predetermined metadata” is “one or more rule sets”.
40
A computer-implemented method for processing images, the method comprising:
11
A computer system for processing images, the computer system comprising:
…
at least one processor configured to access the memory and execute the processor-readable instructions, which when executed by the at least one processor configures the at least one processor to perform a plurality of functions, including functions for:
40
obtaining, by one or more processors, at least one image for analyzing, wherein the at least one image depicts at least one object;
11
obtaining at least one image, wherein the at least one image depicts at least one object;
40
inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins;
11
…analyzing the at least one image via at least one of a plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the same at least one object depicted in the at least one image; …
Where “analyzing the at least one image via at least one of a plurality of image plugins” in claim 11 includes “inputting the at least one image to at least one of a plurality of image plugins” as disclosed in claim 1:
1. …inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins;
analyzing, by the one or more processors, the at least one image via the at least one of the plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the same at least one object depicted in the at least one image; …
40
analyzing, by the one or more processors, the at least one image via the at least one of the plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the at least one object depicted in the at least one image, and
11
…analyzing the at least one image via at least one of a plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the same at least one object depicted in the at least one image; …
40
wherein the analyzing utilizes at least one object detection model or image recognition model;
19
The computer system of claim 13, wherein analyzing the at least one image via the at least two of the plurality of image plugins utilizes at least one object detection model or image recognition model.
40
determining, by the one or more processors, metadata related to the at least one image based on the at least one of the plurality of image plugins;
11
…analyzing the at least one image via at least one of a plurality of image plugins…;
…determining metadata related to the at least one image based on the analyzing the at least one image; …
40
sorting, by the one or more processors, the at least one image to generate at least one sorted image; and
11
…sorting the at least one filtered image to generate at least one sorted image; …
40
displaying, by the one or more processors, the at least one sorted image, wherein the displaying is based on one or more predetermined criteria,
11
…causing display of the at least one sorted image according to a user interaction with the navigation controls on the webpage.
Where the “user interaction” is “one or more predetermined criteria” predetermined by the user.
40
wherein the predetermined criteria includes one or more different aspects of the at least one object, the one or more different aspects comprising a side view, front view, or rear view of the sorted object.
14
The computer system of claim 11, wherein the at least one object is a vehicle, and the one or more different aspects of the same at least one object correspond to a side view, front view, or rear view of the vehicle, and wherein the functions further include:
replacing, based on a user selection, the at least one object in the first display area with an enlarged image of one of the two or more thumbnail images.
Where the user interaction of “user selection,” which is further based on “one or more different aspects of the same at least one object,” corresponds to the “predetermined criteria” (i.e., the “user selection” is a selection predetermined by the user) including the “one or more different aspects”.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 21, 23-26, 28-31, 33-36, and 38-40 are rejected under 35 U.S.C. 103 as being unpatentable over Otten et al. (Otten; US 11,270,168 B1) in view of Endras et al. (Endras; US 2019/0294878 A1).
Regarding claim 21, Otten discloses a computer-implemented method for processing images, the method comprising:
obtaining, by one or more processors, at least one image for analyzing, wherein the at least one image depicts at least one object (lines 65-67 of col. 6 to lines 1-3 of col. 7, recite(s)
[lines 65-67 of col. 6 to lines 1-3 of col. 7] “…First, the dealership 102.sub.1 obtains a set of vehicle images, e.g., the vehicle images 202.sub.1-202.sub.j. The set of vehicle images may be obtained through, for example, a third-party photographer hired to take photographs of the vehicles on the lot of the dealership 102.sub.1.”
, where the “vehicle images” include at least one image for analyzing);
inputting, by the one or more processors, the at least one image to at least one (lines 17-34 of col. 7, recite(s)
[lines 17-34 of col. 7] “Third, the general flow of operations performed by the VIC system 100 continues with an analysis of the vehicle images 202.sub.1-202.sub.j by the image classification machine learning logic (ML) 112. The analysis performed by the image classification ML logic 112 results in a classification of each of the vehicle images 202.sub.1-202.sub.j, e.g., an assignment of a classification identifier (ID) to each of the vehicle images 202.sub.1-202.sub.j. In one embodiment, a classification ID as assigned to a vehicle image may correspond to an item type descriptor that indicates an image assigned a particular classification ID illustrates a particular feature or view. More specifically, the image classification ML logic 112 applies a machine learning model previously generated by the image classification ML logic 112. The machine learning model is generated through supervised learning using a training set to generate a mapping function that represents an algorithm for mapping input data to an output (e.g., a vehicle image to a classification ID, or to a listing of features or views).”
, where the “vehicle images” are analyzed by the “image classification machine learning logic [or model]” is inputting the at least one image to at least one image plugin);
analyzing, by the one or more processors, the at least one image via the at least one (lines 17-34 of col. 7—see preceding citation immediately above—, where the outputted classification of a “listing of features or views” identifies one or more different aspects of the vehicle);
determining, by the one or more processors, metadata related to the at least one image based on the at least one of the plurality of image plugins (lines 17-34 of col. 7—see preceding citation immediately above—, where lines 8-17 of col. 8 further recite(s):
[lines 8-17 of col. 8] “Fourth, upon assignment of a classification ID to each of the vehicle images 202.sub.1-202.sub.j, the vehicle images 202.sub.1-202.sub.j and their corresponding classification IDs, e.g., the classification IDs 204.sub.1-204.sub.j, are again stored in the vehicle image data store 110. Additional details regarding the image classification process and storage are illustrated in FIG. 3. Fifth, the feature content data store 114 stores feature content, which includes, at least, textual information describing one or more features or views of specific vehicles on the dealer's lot, e.g., the vehicle.sub.VIN=123XYZ. …”
, where the “classification IDs… stored in the vehicle image data store” is metadata related to the at least one image);
sorting, by the one or more processors, the at least one image to generate at least one sorted image (lines 35-43 and 50-62 of col. 9, recite(s)
[lines 35-43 of col. 9] “In one embodiment, when providing input to select a specific vehicle, the consumer also selects a view or feature of the vehicle, e.g., a “front view.” For example, in order to determine a vehicle image to display and the corresponding classification ID, the widget compares the consumer selected feature or view to a dataset, e.g., a table storing features/view and the corresponding classification IDs, that indicates a classification ID corresponding to each feature and/or view. …”
[lines 50-62 of col. 9] “By comparing consumer input or the default view to the dataset, the widget may determine the classification ID of the selected feature or view, which enables the widget to display the one or more portions of feature content based on the image-to-feature association. Upon determining the classification ID, the widget causes the rendering of the vehicle image and the one or more portions of feature content that correspond to the determined one or more classification IDs, with the rendering occurring within the GUI 122 of the website 120. As the user selects a second feature or view for display, the widget again references the image-to-feature association for instruction as to which vehicle image and portions of feature content to render.”
, where “determin[ing] a vehicle image to display” based on a “consumer input” from the plurality of vehicle images by “image-to-feature association” comparison is sorting the at least one image from the vehicle image dataset to generate at least one sorted image (e.g., the “vehicle image to display”)); and
displaying, by the one or more processors, the at least one sorted image (lines 35-43 and 50-62 of col. 9—see preceding citation immediately above—, where the determined “vehicle image to display” based on a “consumer input” is displaying the at least one sorted image).
Where Otten does not specifically disclose
inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins; and
analyzing, by the one or more processors, the at least one image via the at least one of a plurality of image plugins, wherein the at least one of a plurality of image plugins is configured to identify one or more different aspects of the at least one object depicted in the at least one image;
Endras, in the same field of endeavor of analyzing a vehicle image via at least one image plugin to identify one or more different views of a vehicle in the image, teaches:
inputting, by the one or more processors, the at least one image to at least one of a plurality of image plugins (para(s). [0026-0027], recite(s)
[0026] “In one implementation, the detection of target view of the vehicle, such as interior target views and/or exterior target views, may use a machine learned model. In particular, in order to recognize different vehicle views from the video stream, such a model may be trained on a server, which may be used by the mobile device. In a specific implementation, automatic recognition of major vehicle views (interior and exterior) may rely on mobile iOS compatible Deep Neural Network models trained on frames extracted from videos of these target views, and images collected for vehicle condition reports. Alternatively, recognition of major vehicle views (interior and exterior) may rely on mobile Android compatible Deep Neural Network models. As discussed further below, the machine learning models may be generated on a device separate from the smartphone (such as a server) and may be downloaded to the smartphone for use by the application executed on the smartphone.”
[0027] “One example of a neural network model is a convolutional neural network model, which may comprise a fee-forward neural network that may be applied to analyzing visual imagery, such as associated with the damage analysis discussed herein. The system may feed images in order for the model to “learn” the features of an image with a certain perspective (such as the front view). In one implementation, the deep learning process may entail supplying images (e.g., more than 50 thousand images) for a specific view (e.g., driver side) in order to model the feature(s) indicative of a driver side image. This process may be repeated for all views.”
, where the “machine learning models” trained for “different vehicle views” are a plurality of image plugins (e.g., a different machine learning model for each different vehicle view)); and
analyzing, by the one or more processors, the at least one image via the at least one of a plurality of image plugins, wherein the at least one of a plurality of image plugins is configured to identify one or more different aspects of the at least one object depicted in the at least one image (para(s). [0026-0027]—see preceding citations immediately above—, where the plurality of “machine learning models” trained “to recognize different vehicle views” from a plurality of vehicle images (e.g., a “video stream”) are a plurality of image plugins identifying one or more different aspects (e.g., “different vehicle views”) of the at least one object (e.g., “vehicle”) by analyzing the at least one image (e.g., an image in a “video stream”)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Otten to incorporate a plurality of image plugins, as taught by Endras above, in order to improve identification of the one or more different aspects of the object across different vehicle view images by employing a plurality of plugins each trained to detect different aspects (e.g., features) corresponding to different views of specific vehicles.
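For illustration only: the combination as mapped above corresponds to a conventional multi-classifier pipeline, in which an image is passed to several per-view models (the claimed “image plugins”), the resulting labels serve as metadata, and the images are sorted for display. The following is a minimal sketch of such a pipeline, not code from Otten, Endras, or the instant application; the names ViewPlugin, classify, and process_images are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ViewPlugin:
    """Hypothetical stand-in for one per-view classifier (cf. Endras paras. [0026-0027])."""
    view: str  # e.g., "front", "side", "rear"

    def classify(self, image: bytes) -> float:
        # A real plugin would run a trained model over the image;
        # a dummy score stands in here.
        return 0.0


def process_images(images: list[bytes], plugins: list[ViewPlugin]) -> list[dict]:
    """Analyze each image via each plugin, attach metadata, and sort (cf. claim 21)."""
    records = []
    for img in images:
        scores = {p.view: p.classify(img) for p in plugins}  # analyzing via the plugins
        best_view = max(scores, key=scores.get)              # analog of a classification ID
        records.append({"image": img, "view": best_view, "scores": scores})
    # Sort into a predetermined view order to generate the "sorted images."
    view_order = {"front": 0, "side": 1, "rear": 2}
    records.sort(key=lambda r: view_order.get(r["view"], len(view_order)))
    return records
```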
Regarding claim 23, Otten in view of Endras discloses the computer-implemented method of claim 21, wherein Otten further discloses displaying the at least one sorted image is based on one or more predetermined criteria (lines 50-62 of col. 9, recite(s)
[lines 50-62 of col. 9] “By comparing consumer input or the default view to the dataset, the widget may determine the classification ID of the selected feature or view, which enables the widget to display the one or more portions of feature content based on the image-to-feature association. Upon determining the classification ID, the widget causes the rendering of the vehicle image and the one or more portions of feature content that correspond to the determined one or more classification IDs, with the rendering occurring within the GUI 122 of the website 120. As the user selects a second feature or view for display, the widget again references the image-to-feature association for instruction as to which vehicle image and portions of feature content to render.”
, where the user-selected “feature or view for display” is one or more predetermined criteria predetermined by the user).
Regarding claim 24, Otten in view of Endras discloses the computer-implemented method of claim 23, wherein Otten further discloses the predetermined criteria includes one or more different aspects of the at least one object, the one or more different aspects comprising a side view, front view, or rear view of the sorted object (lines 64-67 of col. 7 to lines 1-7 of col. 8, recite(s)
[lines 64-67 of col. 7 to lines 1-7 of col. 8] “As one example, the image classification may assign one of a plurality of classification IDs to a vehicle image, with some classification IDs specifying, among others, “Front ¾ View Drivers,” “Front ¾ View Passenger,” “Side View Passenger,” “Rear ¾ View Passenger,” “Side View Drivers,” “Rear View,” “Roof/Sunroof,” “Driver's Dashboard/Centre Console,” “Center Console,” “Door Controls,” etc. Additionally, the assignment of classification IDs to each portion of the feature content may be predetermined and performed upon, or prior to, storage of portions of feature content in the feature content data store 114, as discussed below.”
, where the one or more different aspects include at least a side view (e.g., “Side View Passenger,” “Side View Drivers,” etc.), a front view (e.g., “Front ¾ View Drivers,” “Front ¾ View Passenger,” etc.), and a rear view (e.g., “Rear View,” etc.) of the vehicle).
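For illustration only: the table-lookup mechanism Otten describes (a dataset mapping features/views to classification IDs, col. 9) can be pictured as a simple dictionary lookup. The sketch below is hypothetical; the dictionary contents and the function image_for_selection are assumptions, not Otten’s actual data or code.

```python
# Hypothetical analog of Otten's feature/view-to-classification-ID dataset (col. 9).
CLASSIFICATION_IDS = {
    "Front 3/4 View Drivers": 1,
    "Side View Passenger": 3,
    "Rear View": 6,
}


def image_for_selection(selected_view: str, images_by_id: dict[int, bytes]) -> bytes | None:
    """Resolve the user-selected view to a classification ID, then to the image to display."""
    cid = CLASSIFICATION_IDS.get(selected_view)
    return images_by_id.get(cid) if cid is not None else None
```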
Regarding claim 25, Otten in view of Endras discloses the computer-implemented method of claim 24, wherein Endras further teaches the computer-implemented method of claim 24 further including:
analyzing the at least one image via at least two of the plurality of image plugins (para(s). [0026-0027]—see citations in claim 21 above—, where the “machine learning models” trained for “different vehicle views” includes at least two of a plurality of image plugins (e.g., different machine learning models for each different vehicle view)); and
after the analyzing the at least one image via each of the at least two image plugins, removing the at least one image from a queue of images (para(s). [0105], recite(s)
[0105] “If so, at 716, the application checks the view recognizer, which is configured to generate probabilities as to different views for the frame. At 718, the probabilities generated are determined whether above a threshold. If not, flow diagram 700 moves to 714 and ignores the frame. If so, at 722, the application keeps the frame information (e.g., the time, compass data, blurry metric value, view recognizer label) for post processing. One example of post processing comprises damage analysis, such as analyzing the frame for damage indicative thereto. As discussed above, the specific view of the frame may be use in the damage analysis (e.g., whether the specific view is an exterior or an interior view).…”
, where not keeping the frame information (i.e., “ignor[ing] the frame” in a sequence of images) after determining that the probability of a view is not above a threshold via each of the at least two image plugins (i.e., the “view recognizer” checks the probabilities from the “machine learning models” trained for each of the “different vehicle views,” as disclosed in para(s). [0026-0027] in the preceding limitation above) is removing the at least one image from a queue of images (i.e., removing the “frame” from the sequence of images queued for further processing, e.g., the “post processing” recited at step 722 in para. [0105] above)).
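Purely as an illustrative sketch (all names are hypothetical, not Endras's code), the queue-removal mapping above can be pictured as follows: each frame is popped from a queue of images, checked against per-view models, and dropped when no view probability clears the threshold.

    from collections import deque

    THRESHOLD = 0.8  # hypothetical probability threshold

    def filter_frames(frames, view_models, threshold=THRESHOLD):
        """Pop each frame from the queue; keep its information for post
        processing only if some view model's probability clears the
        threshold, otherwise ignore (remove) the frame."""
        kept = []
        while frames:
            frame = frames.popleft()                  # remove from the queue
            probabilities = [model(frame) for model in view_models]
            if max(probabilities) >= threshold:
                kept.append(frame)                    # keep for post processing
        return kept

    # Toy usage with two stand-in per-view models ("image plugins"):
    exterior = lambda f: f.get("exterior", 0.0)
    interior = lambda f: f.get("interior", 0.0)
    queue = deque([{"exterior": 0.9}, {"interior": 0.2}])
    print(filter_frames(queue, [exterior, interior]))  # -> [{'exterior': 0.9}]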
Regarding claim 26, Otten in view of Endras discloses the computer-implemented method of claim 21, wherein Endras further teaches the method further including:
analyzing at least two or more images stored in a queue of images via at least two of the plurality of image plugins (para(s). [0026-0027] and [0105]—see citations in claim 25 above—, where the sequence of “frame[s]” in a video stream is a queue of images comprising at least two or more images; where each frame is analyzed by at least two of the plurality of image plugins (i.e., the “machine learning models” trained for “different vehicle views”)).
Regarding claim 28, Otten in view of Endras discloses the computer-implemented method of claim 21, wherein Otten further discloses the method further including:
filtering, by the one or more processors, the at least one image based on one or more rule sets to generate at least one filtered image (lines 35-43 and 50-62 of col. 9—see citations in claim 21 limitation “sorting…” above—, where “determin[ing] a vehicle image to display” based on a plurality of user selections (e.g., a first “selected feature or view” and a “second feature or view for display”) is generating at least one filtered image (i.e., the “vehicle image to display”) based on one or more rule sets predetermined by a user (e.g., user-selected “feature[s] or view[s]”)).
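For illustration only (hypothetical rule format, not Otten's), rule-set filtering of the kind mapped above might look like the following in Python, with each rule testing image metadata such as a classification ID.

    # Hypothetical sketch: filter candidate images with one or more rule sets,
    # each rule testing metadata (e.g., a classification ID) of an image.

    images = [
        {"file": "a.jpg", "classification_id": "Side View Drivers"},
        {"file": "b.jpg", "classification_id": "Rear View"},
    ]

    rule_sets = [
        lambda meta: meta["classification_id"] == "Rear View",  # user-selected view
    ]

    def filter_images(images, rule_sets):
        """Generate the filtered image(s): keep an image only when its
        metadata satisfies every rule."""
        return [img for img in images if all(rule(img) for rule in rule_sets)]

    print(filter_images(images, rule_sets))  # keeps only the "Rear View" image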
Regarding claim 29, Otten in view of Endras discloses the computer-implemented method of claim 28, wherein Otten further discloses the one or more rule sets includes the metadata related to the at least one image (lines 35-43 and 50-62 of col. 9—see citations in claim 21 limitation “sorting…” above—, where the user-selected features (e.g., a first “selected feature or view” and a “second feature or view for display”) that determine the “classification ID” of the “vehicle image to display” are the one or more rule sets (e.g., the user-selected features) including metadata (e.g., the “classification ID”) related to the at least one image (e.g., the “vehicle image to display”)).
Regarding claim 30, Otten in view of Endras discloses the computer-implemented method of claim 23, wherein Otten further discloses analyzing the at least one image via the at least one of the plurality of image plugins utilizes at least one object detection model or image recognition model (lines 17-34 of col. 7—see citation in claim 21 limitation “inputting…” above—, where the “image classification machine learning logic [or model]” is at least an image recognition model for recognizing a “particular feature or view”).
Regarding claim 31, the claim recites similar limitations to claim 21, except claim 31 recites the additional limitations:
a memory having processor-readable instructions stored therein; and …
displaying the at least one sorted image, wherein the displaying is based on a predetermined order.
Otten further discloses the additional limitations:
a memory having processor-readable instructions stored therein (lines 53-67 of col. 4 to lines 1-5 of col. 5, recite(s)
[lines 53-67 of col. 4 to lines 1-5 of col. 5] “Alternatively, the component (or logic) may be software… The software may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium… Examples of non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); or persistent storage such as non-volatile memory (e.g., read-only memory “ROM,” power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device. As firmware, the executable code may be stored in persistent storage.”
); and …
displaying the at least one sorted image, wherein the displaying is based on a predetermined order (lines 35-43 and 50-62 of col. 9—see citation in claim 21 limitation “sorting…” above—, where displaying the “vehicle image to display” based on the “image-to-feature association,” using a matching “classification ID” in a dataset (e.g., a “table storing features/view and the corresponding classification IDs”), is displaying based on a predetermined order of “the determined one or more classification IDs” corresponding to the classification IDs in the stored dataset).
Therefore, claim 31 recites similar limitations to claim 21 and is rejected for similar rationale and reasoning (see the analysis for claim 21 above).
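As a hypothetical sketch only (the ordering table and data are invented for illustration), displaying sorted images in a predetermined order keyed to stored classification IDs could be written as:

    # Hypothetical sketch: render sorted images in a predetermined order, here
    # the order in which classification IDs appear in a stored table.

    PREDETERMINED_ORDER = ["Front 3/4 View Drivers", "Side View Drivers", "Rear View"]

    images = [
        {"file": "rear.jpg", "classification_id": "Rear View"},
        {"file": "front.jpg", "classification_id": "Front 3/4 View Drivers"},
    ]

    def display_sorted(images, order=PREDETERMINED_ORDER):
        ranked = sorted(images, key=lambda img: order.index(img["classification_id"]))
        for img in ranked:
            print("display:", img["file"])  # stand-in for GUI rendering

    display_sorted(images)  # front.jpg is displayed first, then rear.jpg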
Regarding claim 33, the claim recites similar limitations to claim 23 and is rejected for similar rationale and reasoning (see the analysis for claim 23 above).
Regarding claim 34, the claim recites similar limitations to claim 24 and is rejected for similar rationale and reasoning (see the analysis for claim 24 above).
Regarding claim 35, the claim recites similar limitations to claim 25 and is rejected for similar rationale and reasoning (see the analysis for claim 25 above).
Regarding claim 36, the claim recites similar limitations to claim 26 and is rejected for similar rationale and reasoning (see the analysis for claim 26 above).
Regarding claim 38, the claim recites similar limitations to claim 28 and is rejected for similar rationale and reasoning (see the analysis for claim 28 above).
Regarding claim 39, the claim recites similar limitations to claim 29 and is rejected for similar rationale and reasoning (see the analysis for claim 29 above).
Regarding claim 40, the claim recites similar limitations to claim 21, except claim 40 recites the additional limitations:
analyzing, by the one or more processors, the at least one image via the at least one of the plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the at least one object depicted in the at least one image, and wherein the analyzing utilizes at least one object detection model or image recognition model; and …
displaying, by the one or more processors, the at least one sorted image, wherein the displaying is based on one or more predetermined criteria includes one or more different aspects of the at least one object, the one or more different aspects comprising a side view, front view, or rear view of the sorted object.
Otten in view of Endras further disclose the additional limitations:
analyzing, by the one or more processors, the at least one image via the at least one of the plurality of image plugins, wherein the at least one of the plurality of image plugins is configured to identify one or more different aspects of the at least one object depicted in the at least one image (lines 17-34 of col. 7 of Otten and para(s). [0026-0027] of Endras—see the combination of Otten and Endras in the similar limitation in claim 21 above), and wherein the analyzing utilizes at least one object detection model or image recognition model (Otten; lines 17-34 of col. 7—see similar limitation in claim 30 above—, where the “image classification machine learning logic [or model]” is at least an image recognition model for recognizing a “particular feature or view”); and …
displaying, by the one or more processors, the at least one sorted image (Otten; lines 35-43 and 50-62 of col. 9—see similar limitation in claim 21 above—, where the determined “vehicle image to display” based on a “consumer input” is displaying the at least one sorted image), wherein the displaying is based on one or more predetermined criteria includes one or more different aspects of the at least one object, the one or more different aspects comprising a side view, front view, or rear view of the sorted object (Otten; lines 50-62 of col. 9 and lines 64-67 of col. 7 to lines 1-7 of col. 8—see similar limitations in claims 23-24 above—, where the user-selected “feature or view for display” constitutes one or more predetermined criteria set by the user; wherein the predetermined criteria include the one or more different aspects of at least a side view (e.g., “Side View Passenger,” “Side View Drivers,” etc.), a front view (e.g., “Front ¾ View Drivers,” “Front ¾ View Passenger,” etc.), and a rear view (e.g., “Rear View,” etc.) of the vehicle).
Therefore, claim 40 recites similar limitations to claim 21 and is rejected for similar rationale and reasoning (see the analysis for claim 21 above).
Claims 22 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Otten in view of Endras as applied to claim 21 above, and further in view of Sieger (US 2011/0313936 A1).
Regarding claim 22, Otten in view of Endras discloses the computer-implemented method of claim 21, wherein Sieger, in the same field of endeavor of dealership webpages, teaches the method of claim 21 further including:
replacing, by the one or more processors, based on a user selection, the at least one sorted image with an enlarged image of one of two or more thumbnail images (para(s). [0031-0033], recite(s)
[0031] “After the user selects the "get my price" button 40, a next page 46 is displayed at shown in FIG. 4. The page 46 shows an image 48 of a vehicle that matches exactly or approximately the make, model and year of the vehicle information input by the user. ...”
[0032] “An algorithm may be used to determine which image or images to display to the user, for example based on the user input. For example, the algorithm may select for display a median price level of damage for that vehicle model and model year, which may be a different level of damage for a different model or model year. Other criteria for selecting the displayed images are within the scope of the invention. The displayed vehicle can be shown in a single image, in a plurality of images, in one or more video clips, by a drawing or by other display format. …”
[0033] “… To better assist the user in making this determination, a series of images 50 of the displayed vehicle are provided showing the vehicle from different angles, and showing different features of the vehicle. The images 50 are shown as so-called thumbnail images, or reduced size images, that are enlarged for display in the larger display window 48 upon selection by the user.”
, where the displayed “thumbnail images” being “enlarged for display in the larger display window 48 upon selection by the user” is replacing the at least one sorted image with an enlarged image of one of two or more thumbnail images based on a user selection).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Otten in view of Endras to incorporate replacing the at least one sorted image with an enlarged image of one of two or more thumbnail images based on a user selection, to allow a user to better view the displayed sorted image on a webpage, as taught by Sieger above.
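Purely for illustration (the window size and in-memory image are hypothetical, and this is not Sieger's code), the thumbnail-to-enlarged-image swap can be sketched with the Pillow imaging library:

    from PIL import Image

    DISPLAY_SIZE = (800, 600)  # hypothetical size of the larger display window

    def enlarge_selected(thumbnail):
        """Return an enlarged copy of the user-selected thumbnail, suitable
        for replacing the image shown in the main display window."""
        return thumbnail.resize(DISPLAY_SIZE)

    # Toy usage with an in-memory stand-in for a selected 160x120 thumbnail:
    thumb = Image.new("RGB", (160, 120), "gray")
    print(enlarge_selected(thumb).size)  # -> (800, 600)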
Regarding claim 32, the claim recites similar limitations to claim 22 and is rejected for similar rationale and reasoning (see the analysis for claim 22 above).
Claims 27 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Otten in view of Endras as applied to claim 21 above, and further in view of Brouard et al. (Brouard; US 10,528,812 B1, cited in Applicant’s IDS filed November 9th, 2023).
Regarding claim 27, Otten in view of Endras discloses the computer-implemented method of claim 21, wherein Brouard, in the same field of endeavor of machine learning models that take images as input, teaches the method of claim 21 further including:
analyzing the at least one image in parallel via the plurality of image plugins (lines 56-67 of col. 8 to lines 1-10 of col. 9, recite(s)
[lines 56-67 of col. 8 to lines 1-10 of col. 9] “As described above, for recognizing and segmenting objects of different types in an input image, different CNN models may be trained such that each CNN model handles recognition and segmentation of one type of objects among the different types of objects. Training and deploying these distinct CNN models may be dispatched in a parallel manner into distributed physical or virtual computing resources. At the same time, for each of the distinct CNN models, the same model may be further dispatched in a parallel manner into distributed physical or virtual computing resources. As such, in one implementation, the CNN models for object recognition and segmentation may be distributed into parallel processes at two different levels to speed up the object recognition and segmentation processes, taking advantage of distributed computing architecture in, e.g., a cloud environment. The same parallel and distributed computing implementation may also be applied to the post-model filtering. In particular, distinct filters may be generated or trained for different types of objects and the filters may be run in parallel. At the same time, each filter may be run as multiple parallel and distributed instances, with each instance handling filtering and removal of false positives for one block.”
, where processing an input image by a plurality of “CNN models for object recognition and segmentation” in a “parallel manner” is analyzing the at least one image in parallel via the plurality of image plugins).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Otten in view of Endras to incorporate analyzing the at least one image in parallel via the plurality of image plugins to increase the speed of the machine learning models for different vehicle views as taught by Brouard above.
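As an illustrative sketch only (the models are stand-ins, not Brouard's CNNs), dispatching several recognition models over the same input image in parallel can be written with Python's concurrent.futures:

    from concurrent.futures import ThreadPoolExecutor

    def model_exterior(image):   # stand-in for one distinct trained model
        return {"exterior_view": 0.92}

    def model_interior(image):   # stand-in for another distinct trained model
        return {"interior_view": 0.11}

    def analyze_in_parallel(image, models):
        """Run every model on the same image concurrently and merge results."""
        with ThreadPoolExecutor(max_workers=len(models)) as pool:
            results = pool.map(lambda m: m(image), models)
        merged = {}
        for result in results:
            merged.update(result)
        return merged

    print(analyze_in_parallel("frame.jpg", [model_exterior, model_interior]))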
Regarding claim 37, the claim recites similar limitations to claim 27 and is rejected for similar rationale and reasoning (see the analysis for claim 27 above).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIA Z YAO whose telephone number is (571)272-2870. The examiner can normally be reached Monday - Friday (8:30AM - 5PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.Z.Y./Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666