Detailed Action
Status of Claims
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Action is in reply to the Amendment filed on 12/23/2025. Claims 1-2, 4-5, 7-11, 13-17, and 19-24 are currently pending and have been examined. Claims 3, 6, 12, and 18 stand cancelled. Claims 1, 10, and 16 have been amended.
Claim Objections
Claims 1, 10, and 16 are objected to for the following informality: “a vehicle, of the one or more vehicles, include” should read “a vehicle, of the one or more vehicles, includes”. Appropriate correction is required.
Claim 16 is objected to for the following informality: “retrain the machine learning model” should read “retrain the first machine learning model” or “retrain the second machine learning model”. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4-5, 7-9, and 21-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
First, it is determined whether the claims are directed to a statutory category of invention. In the instant case, claims 1-2, 4-5, 7-9, and 21-22 are directed to a machine. Therefore, claims 1-2, 4-5, 7-9, and 21-22 are directed to statutory subject matter under Step 1 as described in MPEP 2106 (Step 1: YES).
The claims are then analyzed to determine whether the claims are directed to a judicial exception. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception (Prong One of Step 2A), as well as analyzed to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of the judicial exception (Prong Two of Step 2A).
Claim 1 recites at least the following limitations that are believed to recite an abstract idea:
transmit, to a user, image data indicating one or more images associated with a vehicle;
receive, from the user and for a particular image of the one or more images, user preference selection data indicating a selection by the user of one or more selected sections of the particular image corresponding to one or more vehicle features of the vehicle,
wherein the one or more selected sections are associated with one or more portions of the particular image and selected on the particular image;
wherein the system comprises:
means configured to allow selection of the one or more selected sections based on circling the one or more selected sections via touch interaction or using a tool, and
a designated entry field that receives user feedback input related to the one or more selected sections, in a form of at least one of text or audio; and
wherein the user feedback is received with the user preference selection data;
determine, using a first process that uses an image processing technique, the one or more vehicle features corresponding to the one or more selected sections, wherein the first process utilizes a plurality of reference images associated with a plurality of reference vehicles;
provide the user feedback as input to a second process, wherein the second process uses a natural language processing technique to process the user feedback, and where using the natural language processing technique comprises extracting a feature set of keywords from unstructured data, of the user feedback, for a set of observations;
receive, as output from the second process and based on using the natural language processing technique to process the user feedback, a user preference score corresponding to a user preference level associated with the one or more vehicle features, wherein the user preference score is determined based on patterns recognized in the extracted feature set by the second process;
store, under a user account associated with the user, user preference data indicating the user preference score and the one or more vehicle features;
perform a search based on a received vehicle search request, based on the stored user preference data indicating the user preference score and the one or more vehicle features;
transmit, to the user and based on performing the search, search results including vehicle data corresponding to one or more vehicles, wherein a vehicle, of the one or more vehicles, include one or more features that have a threshold degree of similarity with the determined one or more vehicle features;
update the second process based on other feedback related to traffic, by the user, associated with the search results;
wherein the updated second process allows providing a more relevant search result than a search that performs a textual search without accounting for inferences based on the user feedback related to the one or more selected sections; and
utilize the updated second process for subsequent requests.
The above limitations recite the concept of determining personal interests. These limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in MPEP 2106, in that they recite commercial interactions, e.g., sales activities/behaviors, and managing personal behavior or relationships or interactions between people, e.g., following rules or instructions. Accordingly, under Prong One of Step 2A, claims 1-2, 4-5, 7-9, and 21-22 recite an abstract idea (Step 2A, Prong One: YES).
Prong Two of Step 2A is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception.
In this instance, the claims recite the additional elements of:
A system comprising a memory and one or more processors, communicatively coupled to the memory, configured to perform steps
A user device
A user interface comprising a graphical interface
a cursor
Machine learning models
A machine learning model being trained based on images
A machine learning model being re-trained
A search engine
However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
In addition, the recitations are recited at a high level of generality and also do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
The dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. For example, claims 2, 4-5, 7-9, and 21-22 are directed to the abstract idea itself and do not amount to an integration according to any one of the considerations above. Therefore the dependent claims do not create an integration for the same reasons.
Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus, many of the considerations need not be re-evaluated in Step 2B because the answer will be the same.
In Step 2A, several additional elements were identified as additional limitations:
A system comprising a memory and one or more processors, communicatively coupled to the memory, configured to perform steps
A user device
A user interface comprising a graphical interface
a cursor
Machine learning models
A machine learning model being trained based on images
A machine learning model being re-trained
A search engine
These additional limitations, including the limitations in the dependent claims, do not amount to an inventive concept because they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea. Therefore, the claims lack one or more limitations which amount to an inventive concept in the claims.
For these reasons, the claims are rejected under 35 U.S.C. 101.
Claims 10-11, 13-15, and 23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
First, it is determined whether the claims are directed to a statutory category of invention. In the instant case, claims 10-11, 13-15, and 23 are directed to a process. Therefore, claims 10-11, 13-15, and 23 are directed to statutory subject matter under Step 1 as described in MPEP 2106 (Step 1: YES).
The claims are then analyzed to determine whether the claims are directed to a judicial exception. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception (Prong One of Step 2A), as well as analyzed to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of the judicial exception (Prong Two of Step 2A).
Claim 10 recites at least the following limitations that are believed to recite an abstract idea:
receiving user preference selection data indicating one or more selected sections of an image of a vehicle,
wherein the one or more selected sections correspond to one or more vehicle features of the vehicle, and
wherein the system comprises:
a means configured to allow selection of the one or more selected sections based on circling the one or more selected sections via touch interaction or using a tool; and
a designated entry field that receives user feedback input associated with the one or more selected sections, in a form of at least one of text or audio;
determining, using a first process that uses an image processing technique, the one or more vehicle features corresponding to the one or more selected sections,
wherein the first process utilizes a plurality of reference images associated with a plurality of reference vehicles;
receiving user feedback associated with the one or more vehicle features;
determining, based on processing the user feedback with a second process, one or more user preference scores corresponding to one or more user preference levels associated with the one or more vehicle features,
wherein the second process uses a natural language processing technique to process the user feedback,
wherein using the natural language processing technique comprises extracting a feature set of keywords from unstructured data, of the user feedback, for a set of observations, and
wherein the user preference score is determined based on patterns recognized in the extracted feature set by the second process;
storing user preference data indicating the user preference score and the one or more vehicle features;
performing a search based on a received vehicle search request and based on the stored user preference data indicating the user preference score and the one or more vehicle features;
transmitting, based on performing the search, a search result including a list of one or more vehicles,
wherein a vehicle, of the one or more vehicles, include one or more features that have a threshold degree of similarity with the determined one or more vehicle features;
updating the second process based on other feedback associated with the list,
wherein the updated second process allows providing a more relevant search result than a search that performs a textual search without accounting for inferences based on the user feedback related to the one or more selected sections; and
utilizing the updated second process for subsequent requests.
The above limitations recite the concept of determining personal interests. These limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in MPEP 2106, in that they recite commercial interactions, e.g., sales activities/behaviors, and managing personal behavior or relationships or interactions between people, e.g., following rules or instructions. Accordingly, under Prong One of Step 2A, claims 10-11, 13-15, and 23 recite an abstract idea (Step 2A, Prong One: YES).
Prong Two of Step 2A is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception.
In this instance, the claims recite the additional elements of:
a system having one or more processors
A user interface on a user device, comprising a graphical interface
A cursor
Machine learning models
A machine learning model being trained based on images
re-training the machine learning models
A search engine
However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
In addition, the recitations are recited at a high level of generality and also do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
The dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. For example, claims 11, 13-15, and 23 are directed to the abstract idea itself and do not amount to an integration according to any one of the considerations above. Therefore the dependent claims do not create an integration for the same reasons.
Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus, many of the considerations need not be re-evaluated in Step 2B because the answer will be the same.
In Step 2A, several additional elements were identified as additional limitations:
a system having one or more processors
A user interface on a user device, comprising a graphical interface
A cursor
Machine learning models
A machine learning model being trained based on images
re-training the machine learning models
A search engine
These additional limitations, including the limitations in the dependent claims, do not amount to an inventive concept because they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea. Therefore, the claims lack one or more limitations which amount to an inventive concept in the claims.
For these reasons, the claims are rejected under 35 U.S.C. 101.
Claims 16-17, 19-21, and 24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
First, it is determined whether the claims are directed to a statutory category of invention. In the instant case, claims 16-17, 19-21, and 24 are directed to an article of manufacture. Therefore, claims 16-17, 19-21, and 24 are directed to statutory subject matter under Step 1 as described in MPEP 2106 (Step 1: YES).
The claims are then analyzed to determine whether the claims are directed to a judicial exception. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception (Prong One of Step 2A), as well as analyzed to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of the judicial exception (Prong Two of Step 2A).
Claim 16 recites at least the following limitations that are believed to recite an abstract idea:
transmit, to a user, image data indicating one or more images associated with a vehicle;
receive, from the user and for a particular image of the one or more images, user preference selection data indicating a selection of selected sections of the particular image corresponding to a set of vehicle features of the vehicle,
wherein the one or more selected sections are associated with one or more portions of the particular image and selected on the particular image,
wherein the system comprises:
means configured to allow selection of the one or more selected sections based on circling the one or more selected sections via touch interaction or using a tool, and
a designated entry field that receives user feedback input related to the one or more selected sections, in a form of at least one of text or audio; and
wherein the user feedback is received with the user preference selection data;
determine, using a first process that uses an image processing technique, the set of vehicle features corresponding to the one or more selected sections,
determine, based on processing the user feedback with a second process, a user preference score corresponding to a user preference level associated with the set of vehicle features,
wherein the second process uses a natural language processing technique to process the user feedback, and
wherein using the natural language processing technique comprises extracting a feature set of keywords from unstructured data, of the user feedback, for a set of observations;
store, under a user account associated with the user, user preference data indicating the user preference score and the one or more vehicle features;
perform a search based on a received vehicle search request, based on the stored user preference data indicating the user preference score and the one or more vehicle features;
transmit, based on performing the search, a search result that includes a list of one or more vehicles based on the one or more user preference levels,
wherein a vehicle, of the one or more vehicles, include one or more features that have a threshold degree of similarity with the determined one or more vehicle features;
update the process based on other feedback associated with the list,
wherein the updated process allows providing a more relevant search result than a search that performs a textual search without accounting for inferences based on the user feedback related to the one or more selected sections; and
utilize the updated process for subsequent requests.
The above limitations recite the concept of determining personal interests. These limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in MPEP 2106, in that they recite commercial interactions, e.g., sales activities/behaviors, and managing personal behavior or relationships or interactions between people, e.g., following rules or instructions. Accordingly, under Prong One of Step 2A, claims 16-17, 19-21, and 24 recite an abstract idea (Step 2A, Prong One: YES).
Prong Two of Step 2A is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception.
In this instance, the claims recite the additional elements of:
A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to perform steps
A user device
A user interface on the user device comprising a graphical interface
Machine learning models
A machine learning model being trained based on a plurality of images
Retraining the machine learning models
A search engine
However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
In addition, the recitations are recited at a high level of generality and also do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
The dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. For example, claims 17, 19-21, and 24 are directed to the abstract idea itself and do not amount to an integration according to any one of the considerations above. Therefore the dependent claims do not create an integration for the same reasons.
Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus, many of the considerations need not be re-evaluated in Step 2B because the answer will be the same.
In Step 2A, several additional elements were identified as additional limitations:
A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to perform steps
A user device
A user interface on the user device comprising a graphical interface
Machine learning models
A machine learning model being trained based on a plurality of images
Retraining the machine learning models
A search engine
These additional limitations, including the limitations in the dependent claims, do not amount to an inventive concept because they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea. Therefore, the claims lack one or more limitations which amount to an inventive concept in the claims.
For these reasons, the claims are rejected under 35 U.S.C. 101.
Allowable Subject Matter
Claims 1-2, 4-5, 7-11, 13-17, and 19-24 are allowable over the prior art, though rejected on other grounds (e.g., 35 U.S.C. 101) as discussed above. The combination of elements of the claims as a whole is not found in the prior art.
Claims 1-2, 4-5, 7-11, 13-17, and 19-24 would be allowable if rewritten to overcome the rejections under 35 U.S.C. 101 set forth in this Office Action, and to include all of the limitations of the base claim and any intervening claims.
Upon review of the evidence at hand, it is hereby concluded that the totality of the evidence, alone or in combination, neither anticipates, reasonably teaches, nor renders obvious the below-noted features of Applicant’s invention.
In the present application, claims 1-2, 4-5, 7-11, 13-17, and 19-24 are allowable over the prior art. The most closely related prior art references of record are Yu et al. (US20100313141A1), hereinafter Yu; Fang et al. (US20190251446A1), hereinafter Fang; Koister (US20110282814A1), hereinafter Koister; and Bissex et al. (US20220284486A1), hereinafter Bissex.
Yu teaches systems for determining user preferences based on user interactions with images [Abstract], including showing a set of images to a user [0048], each depicting a product of a specific genre [0064]. The user provides feedback indicating whether they like or dislike a specific image [0049] or selects one image over another as being preferred [0064]. A machine learning algorithm is used to map features in the images to different genres of the products [0073], allowing the system to compute elements represented in the images visually and/or based on metadata [0071]. This algorithm is trained using visual and text data in a training set [0103]. Text analysis can be used on images themselves to extract features of the item [0097], and the system maintains a score representing a user’s preference for items or genres [0042], which is stored [0045] and used to make recommendations [0045] tailored to the user based on implicit preferences.
Yu is deficient in a number of ways, including at least a failure to teach or suggest at least that the product is a vehicle; and the ability to receive a selection by the user of one or more selected sections of the particular image corresponding to one or more vehicle features of the vehicle, wherein the one or more selected sections are associated with one or more portions of the particular image and selected on the particular image via the user interface, wherein the user preference selection data indicates user feedback associated with the one or more vehicle features, and wherein the user feedback is associated with at least one of comments or audio that is provided via the user interface, and received with the user preference selection data; identify, using a first machine learning model, the one or more vehicle features corresponding to the one or more selected sections; provide the user feedback as input to a second machine learning model, wherein the second machine learning model uses a natural language processing technique to process the user feedback; receive, as output from the second machine learning model, a user preference score corresponding to a user preference level associated with the one or more vehicle features; transmit to the user device and based on the user preference data, search results including vehicle data corresponding to one or more vehicles; re-train the second machine learning model based on other feedback associated with the search results; and utilize the re-trained second machine learning model for subsequent requests.
Fang teaches a system for visually-aware item recommendations [Abstract], including training an ML model from a training image dataset that includes implicit feedback [0060]. This model can be retrained based on further user inputs [0086], and is used to provide recommendations to a user [0093].
Fang is deficient in a number of ways, including at least a failure to teach or suggest at least that the product is a vehicle; and the ability to receive a selection by the user of one or more selected sections of the particular image corresponding to one or more vehicle features of the vehicle, wherein the one or more selected sections are associated with one or more portions of the particular image and selected on the particular image via the user interface, wherein the user preference selection data indicates user feedback associated with the one or more vehicle features, and wherein the user feedback is associated with at least one of comments or audio that is provided via the user interface, and received with the user preference selection data; identify, using a first machine learning model, the one or more vehicle features corresponding to the one or more selected sections; provide the user feedback as input to a second machine learning model, wherein the second machine learning model uses a natural language processing technique to process the user feedback; receive, as output from the second machine learning model, a user preference score corresponding to a user preference level associated with the one or more vehicle features; transmit to the user device and based on the user preference data, search results including vehicle data corresponding to one or more vehicles; re-train the second machine learning model based on other feedback associated with the search results; and utilize the re-trained second machine learning model for subsequent requests.
Koister teaches a recommender system for recommending items [Abstract] based on user inputted objects into the system [0092], and determining vehicles with similar features to the inputted object [0040]. This includes specific algorithms for determining similarity between vehicles to generate a list of recommendations [0068].
Koister is deficient in a number of ways, including at least a failure to teach or suggest at least the ability to receive a selection by the user of one or more selected sections of the particular image corresponding to one or more vehicle features of the vehicle, wherein the one or more selected sections are associated with one or more portions of the particular image and selected on the particular image via the user interface, wherein the user preference selection data indicates user feedback associated with the one or more vehicle features, and wherein the user feedback is associated with at least one of comments or audio that is provided via the user interface, and received with the user preference selection data; identify, using a first machine learning model, the one or more vehicle features corresponding to the one or more selected sections; provide the user feedback as input to a second machine learning model, wherein the second machine learning model uses a natural language processing technique to process the user feedback; receive, as output from the second machine learning model, a user preference score corresponding to a user preference level associated with the one or more vehicle features; transmit to the user device and based on the user preference data, search results including vehicle data corresponding to one or more vehicles; re-train the second machine learning model based on other feedback associated with the search results; and utilize the re-trained second machine learning model for subsequent requests.
Bissex teaches a recommendation method for consumer products [Abstract] that receives user feedback and generates customer preferences based on that feedback [0050]. Recommendations are provided based on the stored preferences [0023]. This feedback data, which concerns products tracked on a blockchain rather than those recommended to a user, may be used to retrain the ML model [0050].
Bissex is deficient in a number of ways, including at least a failure to teach or suggest at least that the product is a vehicle; and the ability to receive a selection by the user of one or more selected sections of the particular image corresponding to one or more vehicle features of the vehicle, wherein the one or more selected sections are associated with one or more portions of the particular image and selected on the particular image via the user interface, wherein the user preference selection data indicates user feedback associated with the one or more vehicle features, and wherein the user feedback is associated with at least one of comments or audio that is provided via the user interface, and received with the user preference selection data; identify, using a first machine learning model, the one or more vehicle features corresponding to the one or more selected sections; provide the user feedback as input to a second machine learning model, wherein the second machine learning model uses a natural language processing technique to process the user feedback; receive, as output from the second machine learning model, a user preference score corresponding to a user preference level associated with the one or more vehicle features; transmit to the user device and based on the user preference data, search results including vehicle data corresponding to one or more vehicles; re-train the second machine learning model based on other feedback associated with the search results; and utilize the re-trained second machine learning model for subsequent requests.
Ultimately, the particular combination of limitations as claimed is neither anticipated nor rendered obvious in view of the cited references and the totality of the prior art. While certain references may disclose more general concepts and parts of the claims, the available prior art does not specifically disclose the particular combination of these limitations.
The references, however, do not teach or suggest, alone or in combination, the claimed invention. Examiner emphasizes that the cited references would only be combined, and the claims deemed obvious, based on knowledge gleaned from Applicant’s disclosure. Such a reconstruction is improper (i.e., hindsight reasoning). See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
The Examiner further emphasizes the claims as a whole and hereby asserts that the totality of the evidence fails to set forth, either explicitly or implicitly, an appropriate rationale for further modification of the evidence at hand to arrive at the claimed invention. The combination of features as claimed would not be obvious to one of ordinary skill in the art, as combining various references from the totality of the evidence to reach the claimed combination of features would be a substantial reconstruction of Applicant’s claimed invention relying on improper hindsight bias.
It is thereby asserted by Examiner that, in light of the above and further deliberation over all of the evidence at hand, the claims are allowable over the prior art (though rejected under 35 USC § 101), as the evidence at hand does not anticipate the claims and does not render any further modification of the references obvious to a person of ordinary skill in the art.
Response to Arguments
Applicant's arguments filed 12/23/2025 have been fully considered but are not persuasive.
Claim Rejections – 35 USC § 101
Applicant argues that “the Specification identifies a concrete technical problem, that conventional search systems cannot provide relevant results for product features that may be difficult to describe …leading to wasted processing and network resources.” Applicant alleges that “the claimed system overcomes this by allowing users to select image sections corresponding to features via a specialized user interface, provide feedback via the specialized user interface, and process both image and feedback data using specialized machine learning models to derive preference scores that are used during search.” Applicant argues that claim 1 therefore “provides a practical application of (1) a specialized user interface for selecting one or more selected sections of a particular image corresponding to a vehicle, (2) a designated interface for inputting user feedback (at least one of audio or text) related to the one or more selected sections, (3) a specialized process of processing both the selected sections of the image and the feedback using specialized machine learning models (the claimed first and second machine learning models) to derive preference scores that are used during performing a search. This goes beyond Methods of Organizing Human Activity…and at least integrates the alleged judicial exception into a practical application,” as well as “amounts to significantly more than the alleged abstract idea … because of the specialized features explained above.”
Examiner disagrees. The claims, rather than providing a specialized user interface and specialized processing method, merely provide the abstract idea along with additional elements recited at a high level of generality, such as the user interfaces and models. These additional elements amount to mere instructions to apply the abstract idea in a technological environment, and do not integrate the abstract idea into a practical application [MPEP 2106.05(f)]. The alleged improvement is at best a business improvement stemming solely from the abstract idea itself, e.g., the steps of circling an element on a displayed vehicle image and providing feedback, and not a technological improvement rooted in computer technology.
Applicant argues that claims “10 and 16, as amended, recites similar features” [sic], such that claims 1, 2, 4, 5, 7-11, 13-17, and 19-24 are eligible for the above reasons.
Examiner respectfully disagrees for the reasons addressed above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS JOSEPH SULLIVAN whose telephone number is (571)272-9736. The examiner can normally be reached on Mon - Fri 8-5 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein can be reached on 571-272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.J.S./Examiner, Art Unit 3689
/MARISSA THEIN/Supervisory Patent Examiner, Art Unit 3689