Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/8/26 has been entered. Claims 1-20 are presented for examination.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claim 20 recites “a body position proximity” of an avatar. A body position entails the orientation of the body (e.g., leaning forward, backward, etc.). There is no support in the specification for this limitation.
At best, the specification describes in [0033] “The interaction data may additionally or alternatively include user gaze fixation time (e.g., how long avatar 113 looks at an item or aspect of virtual warehouse 112 and/or virtual object 114), user gaze fixation value (e.g., how many times avatar 113 looks at an item or aspect of virtual warehouse 112 and/or virtual object 114), user proximity time (e.g., how long avatar 113 spends near at an item or aspect of virtual warehouse 112 and/or virtual object 114), user proximity value (e.g., how many times avatar 113 is near at an item or aspect of virtual warehouse 112 and/or virtual object 114), user interaction time (e.g., how long avatar 113 interacts with an item or aspect of virtual warehouse 112 and/or virtual object 114), user interaction value (e.g., how many times avatar 113 interacts with an item or aspect of virtual warehouse 112 and/or virtual object 114), etc. For example, if avatar 113 repeatedly stands near motorcycles for extended amounts of time, the motorcycles may be determined to be a feature of interest to the user 102.”
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 11-15, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Piramuthu et al. (US 20190205962 A1) in view of Bromenshenkel et al. (US 20100121810 A1), in further view of ZHU (CN 103853720 A).
Re-claims 1, 4, Piramuthu et al. teach a method for dynamically modifying a virtual warehouse, the method comprising:
--determining a first feature of interest of a virtual object of the virtual warehouse based on a first passive user interaction [..] with the virtual object;
--generating a first search query based on the determined first feature of interest;
(see e.g. [0056] FIG. 4 depicts a procedure 400 in an example implementation of computer vision and active image search. User interaction with a user interface that outputs a live feed of digital images is monitored (block 402). A user, for instance, may view a live feed of digital images taken of a physical environment of the user 108 and the computing device 102. In this way, a user may view objects of interest as well as characteristics of those objects. The camera platform manager module 116 may monitor the user 108's interaction with (e.g., viewing of) the live feed.
[0022] In some aspects, the described system performs searches that leverage multiple digital images as part of a search query to locate digital content of interest, e.g., listings of particular goods and services.)
--identifying one or more first search results that corresponds to the first search query, the one or more first search results including at least one object with at least one feature;
(see e.g. [0059] A search is then performed, either locally by the computing device 102 or remotely by the service provider system 104. A search result is then output in the user interface based on the search query (block 410) that includes digital content located as part of the search, e.g., product listings, digital images, and so forth.
[0097] Search results that include at least one identified item having a similar pattern are received (block 908). By way of example, the other computing device 602 receives the query response data 634, which includes at least one item having a similar pattern as identified by the listing system 618. Digital content depicting at least one of the identified items is presented via the user interface (block 910). By way of example, the user interface 700 presents images 708, which in this example represent items identified by the listing system 618.)
Piramuthu et al. do not explicitly teach the following limitations.
However, Bromenshenkel et al. teach ---determining a first feature of interest of a virtual object of the virtual warehouse based on a first passive user interaction of an avatar with the virtual object,
(see e.g. [0047] The method 500 begins at step 510, where a browser application detects a user interaction with a virtual object. For example, client application 109 (shown in FIG. 1) may detect a user interacting with a virtual object 132 included in virtual world 130. Such an user interaction may include, e.g., touching an object, picking up an object, looking at an object, walking into a virtual building, operating an object, etc.
[0048] For example, assume the user of client application 109 is interacting with a virtual object 132 representing a portable music player. In this case, client application 109 may store data describing the virtual object 132 in client profile 106 (e.g., object owner, location, dimensions, color, type, etc.)
[0027] Further, client application 109 may be configured to generate and display a visual representation of the user within the immersive environment, generally referred to as an avatar.
[0029] a variety of devices configured to present the virtual world to the user and to translate movement/motion or other actions of the user into actions performed by the avatar representing that user within virtual world 130.)
--the first passive user interaction comprising a user proximity value of the avatar in comparison to the virtual object;
(see e.g. [0031] Such characteristics may be used to determine how much the user interacted with a virtual object 132, and thus to determine how much interest the user has in that particular virtual object 132. For example, a virtual object 132 that the user has devoted a great deal of attention to (e.g., carried, used, operated, looked at closely and for an extend periods of time, etc.) may be determined to be of more interest to the user than an object which the user looked at briefly while "walking" through a virtual room. Further, the records included in the client profile 106 may describe a virtual object 132 that has been explicitly designated by the user as an object of interest. For example, while in the virtual world 130, the user may perform a command indicating that a particular virtual object 132 is an object of interest.
[0049] Optionally, the virtual objects 132 of interest to the user may be determined by evaluating whether the user's interest exceeds a predefined level of interest (e.g., the user interacted with a given virtual object 132 for at least 10 seconds, etc.).
[0033] The user of client application 109 may then be presented with virtual objects 132 that match the objects of interest to the user. Virtual objects 132 may include, for example, a virtual representation of a car that the user has read about in browser application 119, a virtual store (e.g., virtual store 220 shown in FIG. 2) selling goods or services that the user has searched for in browser application 119, and the like. Optionally, objects of interest may be determined by evaluating whether the user's interest exceeds a predefined level of interest (e.g., the user viewed at least two web sites related to a given object in the user's last web browsing session, etc.).
--modifying the virtual warehouse; and
--causing a user interface to output the modified virtual warehouse.
(see e.g. [0035] In the event of multiple matching virtual objects 132, client application 109 may present the matching virtual objects 132 in list form (e.g., a list of object names, a list of object images, etc.). In such a list, clicking on an object name or image may cause the user to be teleported to the location of the corresponding virtual object 132. Further, the list may be sorted according to how closely each virtual object 132 matches the user's web profile 116. Alternatively, client application 109 may present multiple matching virtual objects 132 in map form (e.g., a map of virtual world 130 including visual markers indicating the locations of matching virtual objects 132.).
[0036] In one embodiment, client application 109 may be configured to present a user with web content based on client profile 106. More specifically, client application 109 may initiate browser application 119 in order to display web content (e.g., web sites, search results, etc.) related to virtual objects 132 that the user has interacted with in client application 109.
[0050] At step 560, web content relevant to virtual objects 132 of interest to the user may be identified. For example, client application 109 may be configured to identify any web content (e.g., web sites, web pages, discussion boards, blogs, portals, etc.) that may be relevant to virtual objects 132 that the user interacted with included in virtual world 130. In some cases, a virtual object 132 may be considered to be of interest to the user based on the interaction with the virtual object, including the type of interaction, duration of interaction, etc. For example, assume client profile 106 indicates that the user spent a given amount of time interacting with a virtual representation of a particular type of car while using client application 109. In this case, client application 109 may determine that this type of car is of interest to the user, and may thus search for and identify any web content related to this type of car (e.g., web sites of car dealers, web pages including car reviews, etc.).
Bromenshenkel et al. also teach ---generating a first search query based on the determined first feature of interest; --identifying one or more first search results that corresponds to the first search query, the one or more first search results including at least one object with at least one feature;
(see e.g. [0050] In this case, client application 109 may determine that this type of car is of interest to the user, and may thus search for and identify any web content related to this type of car (e.g., web sites of car dealers, web pages including car reviews, etc.).
--modifying the virtual warehouse to include the at least one object and the at least one counterexample;
(see e.g. [0027] The client application 109 may also be configured to generate and display the immersive environment to the user and to transmit the user's desired actions to virtual world 130 on server 120. Such a display may include content from the virtual world determined from the user's line of sight at any given time.)
4. The method of claim 1, wherein the modifying the virtual warehouse based on the one or more first search results includes modifying an inventory of the virtual warehouse to include a higher proportion of objects having the determined first feature of interest.
(see e.g. [0035] In the event of multiple matching virtual objects 132, client application 109 may present the matching virtual objects 132 in list form (e.g., a list of object names, a list of object images, etc.). In such a list, clicking on an object name or image may cause the user to be teleported to the location of the corresponding virtual object 132. Further, the list may be sorted according to how closely each virtual object 132 matches the user's web profile 116. Alternatively, client application 109 may present multiple matching virtual objects 132 in map form (e.g., a map of virtual world 130 including visual markers indicating the locations of matching virtual objects 132.).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Piramuthu et al. and include the steps cited above, as taught by Bromenshenkel et al., in order to use characteristics of user interactions with virtual objects to determine related content (see e.g. abstract).
Piramuthu et al., in view of Bromenshenkel et al. do not teach the following limitation as claimed.
However, ZHU teaches ---the one or more first search results including at least one object with at least one counterexample absent the at least one feature;
(see e.g. [0033] The embodiment of the invention provides a user-interest-based sensitive network information monitoring method; referring to FIG. 1, the method comprises the following steps:
[0045] Step S130 is a searching step: matching searches are performed on the internet according to the keyword and related keywords to obtain web pages to be marked; the web pages to be marked comprise candidate positive examples and candidate counterexamples, respectively obtained by searching the keyword and the related keywords. A candidate positive example is a web page cared about by the user that meets the user's demand; a candidate counterexample is the so-called "error information" that does not coincide with the user's requirement.)
--modifying the virtual warehouse to include the at least one object and the at least one counterexample;
(see e.g. [0025] In one embodiment, the method further comprises a positive-negative example adjusting module; the adjustment module uses an SVM classifier training method to continuously adjust, on the basis of user requirements, the number of positive and negative examples in the user annotation data set.
[0049] Step S160 is evaluating step, using SVM (support vector machines, support vector machine) classifier training method, all sample web page and selected from the candidate counterexample as the testing set, all the sample web page as a training set from the candidate to be marked of web page classification accuracy to test to obtain the accuracy of classification and pre-set threshold value)
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Piramuthu et al., in view of Bromenshenkel et al., and include the steps cited above, as taught by ZHU, in order to provide a sensitive network information monitoring system based on user attention degree (see e.g. [0028]).
Claims 1-6, 11-15, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Piramuthu et al. (US 20190205962 A1) in view of Bromenshenkel et al. (US 20100121810 A1), in further view of ASTRAKHANTSEV et al. (US 20210133229 A1).
Re-claims 1, 4, Piramuthu et al. teach a method for dynamically modifying a virtual warehouse, the method comprising:
--determining a first feature of interest of a virtual object of the virtual warehouse based on a first passive user interaction [..] with the virtual object;
--generating a first search query based on the determined first feature of interest;
(see e.g. [0056] FIG. 4 depicts a procedure 400 in an example implementation of computer vision and active image search. User interaction with a user interface that outputs a live feed of digital images is monitored (block 402). A user, for instance, may view a live feed of digital images taken of a physical environment of the user 108 and the computing device 102. In this way, a user may view objects of interest as well as characteristics of those objects. The camera platform manager module 116 may monitor the user 108's interaction with (e.g., viewing of) the live feed.
[0022] In some aspects, the described system performs searches that leverage multiple digital images as part of a search query to locate digital content of interest, e.g., listings of particular goods and services.)
--identifying one or more first search results that corresponds to the first search query, the one or more first search results including at least one object with at least one feature;
(see e.g. [0059] A search is then performed, either locally by the computing device 102 or remotely by the service provider system 104. A search result is then output in the user interface based on the search query (block 410) that includes digital content located as part of the search, e.g., product listings, digital images, and so forth.
[0097] Search results that include at least one identified item having a similar pattern are received (block 908). By way of example, the other computing device 602 receives the query response data 634, which includes at least one item having a similar pattern as identified by the listing system 618. Digital content depicting at least one of the identified items is presented via the user interface (block 910). By way of example, the user interface 700 presents images 708, which in this example represent items identified by the listing system 618.)
Piramuthu et al. do not explicitly teach the following limitations.
However, Bromenshenkel et al. teach ---determining a first feature of interest of a virtual object of the virtual warehouse based on a first passive user interaction of an avatar with the virtual object,
(see e.g. [0047] The method 500 begins at step 510, where a browser application detects a user interaction with a virtual object. For example, client application 109 (shown in FIG. 1) may detect a user interacting with a virtual object 132 included in virtual world 130. Such an user interaction may include, e.g., touching an object, picking up an object, looking at an object, walking into a virtual building, operating an object, etc.
[0048] For example, assume the user of client application 109 is interacting with a virtual object 132 representing a portable music player. In this case, client application 109 may store data describing the virtual object 132 in client profile 106 (e.g., object owner, location, dimensions, color, type, etc.)
[0027] Further, client application 109 may be configured to generate and display a visual representation of the user within the immersive environment, generally referred to as an avatar.
[0029] a variety of devices configured to present the virtual world to the user and to translate movement/motion or other actions of the user into actions performed by the avatar representing that user within virtual world 130.)
--the first passive user interaction comprising a user proximity value of the avatar in comparison to the virtual object;
(see e.g. [0031] Such characteristics may be used to determine how much the user interacted with a virtual object 132, and thus to determine how much interest the user has in that particular virtual object 132. For example, a virtual object 132 that the user has devoted a great deal of attention to (e.g., carried, used, operated, looked at closely and for an extend periods of time, etc.) may be determined to be of more interest to the user than an object which the user looked at briefly while "walking" through a virtual room. Further, the records included in the client profile 106 may describe a virtual object 132 that has been explicitly designated by the user as an object of interest. For example, while in the virtual world 130, the user may perform a command indicating that a particular virtual object 132 is an object of interest.
[0049] Optionally, the virtual objects 132 of interest to the user may be determined by evaluating whether the user's interest exceeds a predefined level of interest (e.g., the user interacted with a given virtual object 132 for at least 10 seconds, etc.).
[0033] The user of client application 109 may then be presented with virtual objects 132 that match the objects of interest to the user. Virtual objects 132 may include, for example, a virtual representation of a car that the user has read about in browser application 119, a virtual store (e.g., virtual store 220 shown in FIG. 2) selling goods or services that the user has searched for in browser application 119, and the like. Optionally, objects of interest may be determined by evaluating whether the user's interest exceeds a predefined level of interest (e.g., the user viewed at least two web sites related to a given object in the user's last web browsing session, etc.).
--modifying the virtual warehouse; and
--causing a user interface to output the modified virtual warehouse.
(see e.g. [0035] In the event of multiple matching virtual objects 132, client application 109 may present the matching virtual objects 132 in list form (e.g., a list of object names, a list of object images, etc.). In such a list, clicking on an object name or image may cause the user to be teleported to the location of the corresponding virtual object 132. Further, the list may be sorted according to how closely each virtual object 132 matches the user's web profile 116. Alternatively, client application 109 may present multiple matching virtual objects 132 in map form (e.g., a map of virtual world 130 including visual markers indicating the locations of matching virtual objects 132.).
[0036] In one embodiment, client application 109 may be configured to present a user with web content based on client profile 106. More specifically, client application 109 may initiate browser application 119 in order to display web content (e.g., web sites, search results, etc.) related to virtual objects 132 that the user has interacted with in client application 109.
[0050] At step 560, web content relevant to virtual objects 132 of interest to the user may be identified. For example, client application 109 may be configured to identify any web content (e.g., web sites, web pages, discussion boards, blogs, portals, etc.) that may be relevant to virtual objects 132 that the user interacted with included in virtual world 130. In some cases, a virtual object 132 may be considered to be of interest to the user based on the interaction with the virtual object, including the type of interaction, duration of interaction, etc. For example, assume client profile 106 indicates that the user spent a given amount of time interacting with a virtual representation of a particular type of car while using client application 109. In this case, client application 109 may determine that this type of car is of interest to the user, and may thus search for and identify any web content related to this type of car (e.g., web sites of car dealers, web pages including car reviews, etc.).
Bromenshenkel et al. also teach ---generating a first search query based on the determined first feature of interest; --identifying one or more first search results that corresponds to the first search query, the one or more first search results including at least one object with at least one feature;
(see e.g. [0050] In this case, client application 109 may determine that this type of car is of interest to the user, and may thus search for and identify any web content related to this type of car (e.g., web sites of car dealers, web pages including car reviews, etc.).
--modifying the virtual warehouse to include the at least one object and the at least one counterexample;
(see e.g. [0027] The client application 109 may also be configured to generate and display the immersive environment to the user and to transmit the user's desired actions to virtual world 130 on server 120. Such a display may include content from the virtual world determined from the user's line of sight at any given time.)
4. The method of claim 1, wherein the modifying the virtual warehouse based on the one or more first search results includes modifying an inventory of the virtual warehouse to include a higher proportion of objects having the determined first feature of interest.
(see e.g. [0035] In the event of multiple matching virtual objects 132, client application 109 may present the matching virtual objects 132 in list form (e.g., a list of object names, a list of object images, etc.). In such a list, clicking on an object name or image may cause the user to be teleported to the location of the corresponding virtual object 132. Further, the list may be sorted according to how closely each virtual object 132 matches the user's web profile 116. Alternatively, client application 109 may present multiple matching virtual objects 132 in map form (e.g., a map of virtual world 130 including visual markers indicating the locations of matching virtual objects 132.).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Piramuthu et al. and include the steps cited above, as taught by Bromenshenkel et al., in order to use characteristics of user interactions with virtual objects to determine related content (see e.g. abstract).
Piramuthu et al., in view of Bromenshenkel et al. do not teach the following limitation as claimed.
However, ASTRAKHANTSEV et al. teach ---the one or more first search results including at least one object with at least one counterexample absent the at least one feature;
(see e.g. [0040] FIG. 3 illustrates an example 300 user interface 302 and a set of results images 306, 308, 310, returned in response to a query 304 according to some aspects of the present disclosure. In this embodiment, a set of results images 306, 308, 310 are displayed in response to a user query 304 in a user interface 302. The results images 306, 308, 310 are part of an image row as explained in greater detail herein. The results images in an image row (e.g., 306, 308, 310) are visually similar in constant attributes such as item view angle, item color, and image background, and are different in selected varying attributes, such as SUV model.
[0048] For a more focused query, such as “Mazda® SUVs” the images can be of different models of Mazda® SUVs. Furthermore, images can comprise associated attributes, which can be indicated in associated metadata. Thus, an attribute value showing the manufacture, an attribute value showing the model, and so forth can be stored in metadata associated with a particular image. Any number and/or type of attributes can be stored in the metadata, depending on the image.
[0021] In other use cases the goal is to present a different combination of similarities and/or differences. For example, in a second use case the images can be of the same item with different attributes. For example, the same model SUV with similar view aspects, but with different colors so the user can compare color and decide which they like best. Thus, in general, embodiments of the present disclosure select images with a first set of constant attributes and a second set of varying aspects. In this disclosure, a set is one or more items and a subset is all or less than all items in a set.
[0115] Thus, if the user searches for “Model Y SUVs,” it can be interpreted that the user wants to see various aspects of Model Y SUVs. The attribute to vary, in this representative example, color can be inferred by the user search history, by user input, by user interaction with presented images, by popularity, and/or in some other fashion.
[0046] The search engine 410 and one or more data stores 412 operate in the manner usually associated with search engines/data stores. The search engine 410 receives a query from a user, such as via the user interacting with a web browser or other application. The search engine 410 retrieves search results that are relevant to the query. Included in these search results can be one or more images that are determined to be relevant to the search query. In the context of this disclosure, relevant means results that the search engine determines can be presented to the user as being responsive to the entered query.
[0136] Validation data 1214 is likewise image pairs that are similar and/or dissimilar.)
--modifying the virtual warehouse to include the at least one object and the at least one counterexample;
(see e.g. [0023] Embodiments of the present disclosure can be integrated into a search system or search engine or can be implemented as “post processing” on the images that the search system has identified as relevant to a user query.
[0032] In addition, in some implementations, a user device can be configured to transmit data captured locally during use of relevant application(s) to the cloud or the local ML program and provide supplemental training data that can serve to fine-tune or increase the effectiveness of the MLA. The supplemental data can also be used to facilitate identification of contents and/or to increase the training set for future application versions or updates to the current application.)
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Piramuthu et al., in view of Bromenshenkel et al., and include the steps cited above, as taught by ASTRAKHANTSEV et al., in order to provide clues as to what might be of more interest to a user (see e.g. [0106]), and to provide a feedback-type process where parameters (also called weights) in the model are successively adjusted until a desired level of accuracy is achieved (see e.g. [0131]).
Re-claim 2, Piramuthu et al. teach the method of claim 1, wherein, in addition to the user proximity value, the first passive user interaction includes one or more of a user gaze fixation time, a user gaze fixation value, a user proximity time, a user interaction time, or a user interaction value.
(see e.g. [0058] A characteristic is inferred from the selected digital image through comparison with at least one other digital image of the live feed (block 406). As part of the user interaction, for instance, a user may “look around” a physical environment. As part of this, the user may then focus or “zoom in” or “zoom out” on a particular object, such as to view an overall shape of the object, a pattern, texture, or material of the object, and so on.
Claim 4. A method as described in claim 1, wherein inferring the characteristic is based on an amount of zoom of the selected digital image and the at least one other digital image.)
Re-claim 3, Piramuthu et al. do not teach the following limitations.
However, Bromenshenkel et al. teach the method of claim 2, wherein the determining the first feature of interest of a virtual object of the virtual warehouse further comprises: determining that the one or more of the user gaze fixation time, the user gaze fixation value, the user proximity time, the user interaction time, or the user interaction value are above a predetermined threshold.
(see e.g. [0006] collecting data describing interactions between an avatar and virtual objects in a virtual environment, the avatar being manipulated by a user; characterizing a level of interest of the user in the virtual objects using the collected data; determining web content corresponding only to those virtual objects for which the level of interest exceeds a predetermined level;
[0049] Optionally, the virtual objects 132 of interest to the user may be determined by evaluating whether the user's interest exceeds a predefined level of interest (e.g., the user interacted with a given virtual object 132 for at least 10 seconds, etc.).
[0031] Further, each interaction record may include characteristics of the user interaction itself (e.g., type of interaction, date/time, duration, etc.). Such characteristics may be used to determine how much the user interacted with a virtual object 132, and thus to determine how much interest the user has in that particular virtual object 132. For example, a virtual object 132 that the user has devoted a great deal of attention to (e.g., carried, used, operated, looked at closely and for an extend periods of time, etc.) may be determined to be of more interest to the user than an object which the user looked at briefly while "walking" through a virtual room.
[0048] Further, client application 109 may store data describing the user's interaction with virtual object 132 in client profile 106. Such data may include, e.g., a degree and type of interaction, how long the user interacted with the object, what the user was doing at the time of the interaction, a location of the interaction, etc.)
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Piramuthu et al., and include the steps cited above, as taught by Bromenshenkel et al., in order to determine virtual objects of interest to the user (see e.g. [0049]).
Re-claims 5, 6, Piramuthu et al., in view of Bromenshenkel et al., do not teach the following limitations.
However, ASTRAKHANTSEV et al. teach the method of claim 1, wherein the modifying the virtual warehouse based on the one or more first search results includes modifying an inventory of the virtual warehouse to include the at least one counterexample to use as a control feature of interest.
(see e.g. [0021] In other use cases the goal is to present a different combination of similarities and/or differences. [0023] Embodiments of the present disclosure can be integrated into a search system or search engine or can be implemented as “post processing” on the images that the search system has identified as relevant to a user query. For example, in a second use case the images can be of the same item with different attributes. For example, the same model SUV with similar view aspects, but with different colors so the user can compare color and decide which they like best.
[0032] In addition, in some implementations, a user device can be configured to transmit data captured locally during use of relevant application(s) to the cloud or the local ML program and provide supplemental training data that can serve to fine-tune or increase the effectiveness of the MLA. The supplemental data can also be used to facilitate identification of contents and/or to increase the training set for future application versions or updates to the current application.)
6. The method of claim 5, further comprising: determining a user’s interest in the at least one counterexample to gauge a correctness of the determined first feature of interest,
(see e.g. [0131] In some instances, the training is supervised meaning that the training utilizes annotated input data. The machine learning tool appraises the value of the features 1106 as they correlate to the training data 1104. The result of the training is the trained machine learning model 1116.
[0132] At this point the machine learning model 1116 is trained but is unvalidated. The model can be used directly or can be validated using a validation process. The validation process comprises sending validation data 1114 which is the same as data expected to be processed by the machine learning model with annotations that indicate the output that would be expected from the model. The actual output values from the model can be compared to the image annotation to see if the model produced the expected answer. The model can be said to be validated once a certain percentage of correct assessments are produced. For models that do not reach the desired level of correctness, retraining using more data may be in order.)
wherein the user’s interest is determined based on one or more of a further user gaze fixation time, a further user gaze fixation value, a further user proximity time, a further user proximity value, a further user interaction time, or a further user interaction value.
(see e.g. [0105] FIG. 8 illustrates an example 800 image selection process according to some aspects of the present disclosure. In some embodiments, it can be desirable to take into account a user's search history and/or click history to help identify which images a user is more likely to be interested in. FIG. 8 illustrates a mechanism for incorporating search/click history into the image selection process in order to have greater personalization to a user.)
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Piramuthu et al., in view of Bromenshenkel et al., and include the steps cited above, as taught by ASTRAKHANTSEV et al., in order to provide clues as to what might be of more interest to a user (see e.g. [0106]), and to provide a feedback-type process where parameters (also called weights) in the model are successively adjusted until a desired level of accuracy is achieved (see e.g. [0131]).
Claim 11 recites similar limitations as claim 1 and is therefore rejected under the same arts and rationale.
Furthermore, Bromenshenkel et al. anticipate --the system comprising: a virtual reality headset;
determining, via the virtual reality headset, a first feature of interest of a virtual object of the virtual warehouse based on a first passive user interaction with the virtual object
(see e.g. [0029] The user may view the virtual world using a display device 140, such as an LCD or CRT monitor display, and interact with the client application 109 using input devices 150. Further, in one embodiment, the user may interact with client application 109 and virtual world 130 using a variety of virtual reality interaction devices 160. For example, the user may don a set of virtual reality goggles that have a screen display for each lens. Further, the goggles could be equipped with motion sensors that cause the view of the virtual world presented to the user to move based on the head movements of the individual. As another example, the user could don a pair of gloves configured to translate motion and movement of the user's hands into avatar movements within the virtual reality environment. Of course, embodiments of the invention are not limited to these examples and one of ordinary skill in the art will readily recognize that the invention may be adapted for use with a variety of devices configured to present the virtual world to the user and to translate movement/motion or other actions of the user into actions performed by the avatar representing that user within virtual world 130.
The Examiner notes that Bromenshenkel et al. teach that the user may interact with client application 109 and virtual world 130 using a variety of virtual reality interaction devices 160. Although examples of virtual reality goggles and virtual gloves are listed, Bromenshenkel et al. specifically teach that embodiments of the invention are not limited to these examples (see e.g. [0029]).
Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious (see KSR rationale B).
Claim 12 recites similar limitations as claim 2 and is therefore rejected under the same arts and rationale.
Claim 13 recites similar limitations as claim 3 and is therefore rejected under the same arts and rationale.
Claim 14 recites similar limitations as claim 4 and is therefore rejected under the same arts and rationale.
Claim 15 recites similar limitations as claim 5 and is therefore rejected under the same arts and rationale.
Claim 20 recites similar limitations as claims 2, 6 and is therefore rejected under the same arts and rationale.
Furthermore, ASTRAKHANTSEV et al. teach comparing the potential interest in the feature with the potential interest in the counterexample.
(see e.g. [0037] For example, depending on the use case, it may be beneficial to present images that are visually similar, or present images that cover a range of attribute values. For example, if the user's purpose is to compare one model of Mazda® SUV to another to identify differences in the model year, the user's task would be made easier by presenting images of different Mazda® SUVs while making sure the vehicles are the same or similar color, have the same or similar backgrounds, have the same view aspect, and so forth. Thus, in one use case, images that are visually similar would be useful. Other use cases will be made easier with a different combination of similarities and/or differences as explained herein.
[0115] In this instance, the item descriptor that will be used for the process can be selected based on what is in the query, such as was previously described above. Thus, if the user searches for “Model Y SUVs,” it can be interpreted that the user wants to see various aspects of Model Y SUVs. The attribute to vary, in this representative example, color can be inferred by the user search history, by user input, by user interaction with presented images, by popularity, and/or in some other fashion.
[0129] The machine learning models utilize features such as the attributes for which values are to be recognized 1106 for analyzing the data to generate assessments 1122. A feature is an individual measurable property of a phenomenon being observed. The concept of a feature is related to that of an explanatory variable used in statistical techniques such as linear regression. Choosing informative, discriminating, and independent features is important for effective operation of the MLP in pattern recognition, classification, and regression. Features may be of different types.)
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Piramuthu et al., in view of Bromenshenkel et al., and include the steps cited above, as taught by ASTRAKHANTSEV et al., in order to choose informative, discriminating, and independent features, which is important for effective operation of the MLP in pattern recognition, classification, and regression (see e.g. [0129]).
Claims 7-9, 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Piramuthu et al. (US 20190205962 A1) in view of Bromenshenkel et al. (US 20100121810 A1), in view of ASTRAKHANTSEV et al. (US 20210133229 A1) in further view of KURALENOK et al. (US 20170132323 A1).
Re-claims 7, 8, 9, Piramuthu et al., in view of Bromenshenkel et al., in view of ASTRAKHANTSEV et al. do not teach the limitations as claimed.
However, KURALENOK et al. teach the method of claim 1, further comprising: determining a second feature of interest of the virtual object of the virtual warehouse based on a second passive user interaction with the virtual object; generating a second search query based on the determined first feature of interest and the determined second feature of interest; identifying one or more second search results that corresponds to the second search query, the one or more second search results including a portion of the at least one object with at least one feature, the portion of the at least one object with at least one feature including the determined first feature of interest and the determined second feature of interest; modifying the virtual warehouse based on the one or more second search results; and causing the user interface to output the modified virtual warehouse.
(see e.g. [0013] The method comprises receiving the first search query from an electronic device associated with the user, and, responsive to the first search query, generating a first search query result set. The first search query result set is displayed to the user on a first SERP.
[0015] The first user interest parameter is generated by: i) receiving an indication of a first user interaction with the first search result element on the first search result; ii) determining a first weight for the first search result element based on the first user interaction with the first search result element on the first search result; iii) receiving an indication of a second user interaction with the first search result element on the second search result; iv) determining a second weight for the first search result element based on the second user interaction with the first search result element on the second search result; and v) generating the first user interest parameter based on summing the first weight and the second weight for the first search result element.
[0098] The nature of the first user interactions is not particularly limited. The user may engage strongly, weakly, or not at all with the first search result element 212 on the SERP 108.)
8. The method of claim 7, wherein the modifying the virtual warehouse based on the one or more second search results includes modifying the at least one object with at least one feature such that a higher proportion of the at least one object with at least one feature includes the determined first feature of interest and the determined second feature of interest.
(see e.g. [0016] Next, in accordance with the first broad aspect of the present technology, a second search query is generated. The second search query includes the first search query, the first search result element, and the first user interest parameter as a reformulation of the first search query indicating significance of the first search result element. Responsive to the second search query, a second search query result set is generated, and the second search query result set is displayed to the user, thereby generating the refined SERP.)
9. The method of claim 7, wherein the generating of the second search query further comprises: determining weights for each of the determined first feature of interest and the second feature of interest; and generating the second search query based on the determined first feature of interest, the determined second feature of interest, and the determined weights.
(see e.g. [0014] The first user interest parameter indicates user interest in the first search result element and is a weighted accumulation of user interaction with the first search result element on the first search result and the second search result.
[0015] The first user interest parameter is generated by: i) receiving an indication of a first user interaction with the first search result element on the first search result; ii) determining a first weight for the first search result element based on the first user interaction with the first search result element on the first search result; iii) receiving an indication of a second user interaction with the first search result element on the second search result; iv) determining a second weight for the first search result element based on the second user interaction with the first search result element on the second search result; and v) generating the first user interest parameter based on summing the first weight and the second weight for the first search result element.)
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Piramuthu et al., in view of Bromenshenkel et al., in view of ASTRAKHANTSEV et al. and include the steps cited above, as taught by KURALENOK et al., in order to reflect the user interest (see e.g. [0009]).
Claim 16 recites similar limitations as claim 7 and is therefore rejected under the same arts and rationale.
Claim 17 recites similar limitations as claim 8 and is therefore rejected under the same arts and rationale.
Claim 18 recites similar limitations as claim 9 and is therefore rejected under the same arts and rationale.
Claims 7-9, 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Piramuthu et al. (US 20190205962 A1) in view of Bromenshenkel et al. (US 20100121810 A1), in view of ZHU (CN 103853720 A), in further view of KURALENOK et al. (US 20170132323 A1).
Claims 10, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Piramuthu et al. (US 20190205962 A1), in view of Bromenshenkel et al. (US 20100121810 A1), in view of ASTRAKHANTSEV et al. (US 20210133229 A1), in further view of UNNIKRISHNAN et al. (US 20200265499 A1).
Re-claim 10, Piramuthu et al. teach the method of claim 1, wherein the determining of the first feature of interest includes:
--obtaining passive user interaction data that includes the first passive user interaction with the virtual object; and -- determining, via a trained machine learning model, one or more user preferences based on the passive user interaction data,
(see e.g.[0054] User interaction and capture of the digital images may also be used to infer which characteristics of the digital images are to be used as part of a search to infer a user's intent as part of a search. As shown at the first stage 352 of FIG. 3B, for instance, a digital image 358 is captured of a dress having a pattern. In this example, the digital image 358 includes an entire outline of the dress. Thus, the camera platform manager module 116, through machine learning, may detect that the overall shape of the dress is of interest to a user.)
Piramuthu et al., in view of Bromenshenkel et al., do not teach the following limitations as claimed.
However, UNNIKRISHNAN et al. teach --wherein the trained machine learning model has been trained based on training passive user interaction data, training feature of interest data, training virtual object data, and training virtual warehouse data, to learn associations between the training passive user interaction data and the training feature of interest data, such that the trained machine learning model is configured to output the first feature of interest in response to input of the passive user interaction data.
(see e.g. [0024] [0025] In some implementations, the vehicle information platform may perform a training operation on the machine learning model with historical data. In some implementations, the historical data may include historical data identifying monthly purchase or lease payment options for vehicles by users, credit ratings of the users, makes of the vehicles, models of the vehicles, years of the vehicles, mileages of the vehicles, prices of vehicles, and/or the like. The vehicle information platform may separate the historical data into a training set, a validation set, a test set, and/or the like. The training set may be utilized to train the machine learning model.
[0023] As further shown in FIG. 1C, and by reference number 130, the vehicle information platform may process the particular vehicle data and profile data associated with the user, with a model, to determine purchase options for the particular vehicle and the user. For example, the model may receive the particular vehicle data and the profile data associated with the user as inputs and may output a recommendation of the purchase options for the particular vehicle and the user based on the inputs. In some implementations, the model may include a machine learning model, such as a pattern recognition model that identifies purchase options for the particular vehicle and the user (e.g., based on the account information of the user, a period of time the particular vehicle has been present on the vehicle lot, and/or the like).)
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Piramuthu et al., in view of Bromenshenkel et al., in view of ASTRAKHANTSEV et al. and include the steps cited above, as taught by UNNIKRISHNAN et al., in order to provide options most relevant to the user and increase efficiency of negotiations (see e.g. [0029]).
Claim 19 recites similar limitations as claim 10 and is therefore rejected under the same arts and rationale.
Claims 10, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Piramuthu et al. (US 20190205962 A1), in view of Bromenshenkel et al. (US 20100121810 A1), in view of ZHU (CN 103853720 A), in further view of UNNIKRISHNAN et al. (US 20200265499 A1).
Response to Arguments
Applicant’s arguments with respect to claims 1-20 have been considered but are moot in view of the new grounds of rejection. The previously argued reference, Pollack, is no longer relied upon. ZHU et al. and UNNIKRISHNAN et al. teach the claimed counterexample feature.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUNA CHAMPAGNE whose telephone number is (571)272-7177. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Florian Zeender can be reached at 571 272-6790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LUNA CHAMPAGNE/Primary Examiner, Art Unit 3627
March 23, 2026