DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is in response to applicant’s arguments and amendments filed 12/05/2025, which are in response to the USPTO Office Action mailed 9/29/2025. Applicant’s arguments have been considered with the results that follow: THIS ACTION IS MADE FINAL.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5, 7-8, 12, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over BURGESS et al. (US PGPUB No. 2023/0367458; Pub. Date: Nov. 16, 2023) in view of GUPTA et al. (US PGPUB No. 2020/0285667; Pub. Date: Sep. 10, 2020) and AZIMI et al. (US PGPUB No. 2023/0131183; Pub. Date: Apr. 27, 2023).
Regarding independent claim 1,
BURGESS discloses a method, performed at a computer system comprising a processor and a computer-readable medium, comprising: receiving a search query via a user interface of a device associated with a user of an online system, wherein the search query is a free text that includes one or more words; See Paragraph [0263], (Disclosing a system for providing and updating user interfaces for search functions. A user may provide an input to a search interface indicating an intent to perform a search. Note [0034] wherein the system is embodied as a digital assistant configured to interpret natural language input in spoken and/or textual format to infer user intent. FIGs. 8G-8N illustrate a graphical user interface comprising affordance 814a representing a search bar wherein a user may provide a textual input, i.e. a method, performed at a computer system comprising a processor and a computer-readable medium, comprising: receiving a search query via a user interface of a device associated with a user of an online system (e.g. a user may provide a text input to a digital assistant), wherein the search query is a free text that includes one or more words (e.g. the user input comprises natural language text);)
accessing a search query machine-learning model of the online system, wherein the search query machine-learning model is trained to identify a set of search results matching the search query; See Paragraphs [0230] & [0232], (Natural language processing module 732 may identify an actionable intent or domain based on the user natural language request and generates a structured query to represent the identified actionable intent. Natural language processing module 732 is implemented using machine learning mechanisms to select the one or more candidate actionable intents. Note [0263] wherein a user input may include a search input indicating that a user would like a search to be performed, i.e. accessing a search query machine-learning model of the online system (e.g. natural language processing module 732 is implemented as an ML model that may identify a user intent to perform a search), wherein the search query machine-learning model is trained to identify a set of search results matching the search query (e.g. NLP module 732 may generate a structured query to perform a search task in response to determining a user intent to search);)
applying the search query machine-learning model to the search query and user data associated with the user to generate the set of search results including a set of one or more primary matches representing one or more items that are the most relevant to the search query among the set of search results, See Paragraphs [0230] & [0232], (Natural language processing module 732 may identify an actionable intent or domain based on the user natural language request and generates a structured query to represent the identified actionable intent. Natural language processing module 732 is implemented using machine learning mechanisms to select the one or more candidate actionable intents.) See Paragraph [0276], (Search results user interface 818 includes interface element 818B as in FIG. 8J wherein a user has provided a search topic of "Charles Appleseed", i.e. applying the search query machine-learning model to the search query and user data (e.g. Note [0229] wherein natural language processing module 732 may use user-specific information to supplement information in the user input to further define the user intent) associated with the user to generate the set of search results including a set of one or more primary matches representing one or more items that are the most relevant to the search query among the set of search results (e.g. NLP module 732 may determine a user intent to search and subsequently provide search results as in FIG. 8J which provides search results responsive to the user query),)
BURGESS does not disclose the step of generating the set of search results including a set of one or more complementary items for complementing the set of one or more primary matches, and a set of one or more functional blocks for filtering the set of search results;
GUPTA discloses the step of generating the set of search results including a set of one or more complementary items for complementing the set of one or more primary matches, See FIG. 2F & Paragraph [0055], (Disclosing a system for providing batch search interfaces that support operations for dynamically generating batch search queries. FIG. 2F illustrates a graphical user interface for displaying search results comprising a plurality of search result groups including controls for displaying a set of complementary results 262 retrieved as part of a search process, i.e. a set of one or more complementary items for complementing the set of one or more primary matches,)
and a set of one or more functional blocks for filtering the set of search results; See Paragraph [0024], (The batch search query interface comprises a search results page including filtering controls that allow a user to retain or remove different selections, i.e. a set of one or more functional blocks for filtering the set of search results;)
BURGESS and GUPTA are analogous art because they are in the same field of endeavor, dynamic search interfaces. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of BURGESS to include the method of providing complementary search results and filter options as disclosed by GUPTA. Paragraph [0022] of GUPTA discloses that the batch search interface includes control elements that allow users to manipulate dynamic and visual alterations of the user interface, such as providing for simultaneously displaying search results in different portions of a search interface.
BURGESS-GUPTA does not disclose the step of generating result metadata by extracting, from the set of search results, a first density of the set of one or more primary matches in the set of search results and a second density of the set of one or more complementary items in the set of search results;
accessing a layout selection machine-learning model of the online system, wherein the layout selection machine-learning model is trained to identify a layout for a search results user interface at the device associated with the user;
applying the layout selection machine-learning model to the result metadata including information about the first density and information about the second density to identify the layout for the search results user interface;
and causing the device associated with the user to display the set of search results at the search results user interface using the identified layout for the search results user interface.
AZIMI discloses the step of generating result metadata by extracting, from the set of search results, a first density of the set of one or more primary matches in the set of search results and a second density of the set of one or more complementary items in the set of search results; See Paragraphs [0055]-[0056], (Disclosing a system for automatic modification of a user interface. The system initiates an assessment of a user interface for modification by analyzing the individual components of a screen and identifies page attributes such as the presence/absence of components as well as which components have more/less emphasis, page sections such as a buy section, a product description section, etc., i.e. generating result metadata by extracting, from the set of search results, a first density of the set of one or more primary matches in the set of search results and a second density of the set of one or more complementary items in the set of search results (e.g. the assessment may determine whether to emphasize a particular section of the UI based on attributes of the plurality of sections for display. For an e-commerce display page, the assessment may identify particular sections, such as a product section (e.g. a section for primary matches) and a recommendation section (e.g. a section for displaying complementary items), wherein each section is associated with a plurality of attributes such as the presence/absence of other elements such as a product section or buy section, i.e. density information (e.g. information relating to the presence/absence and/or emphasis of a section)).)
accessing a layout selection machine-learning model of the online system, wherein the layout selection machine-learning model is trained to identify a layout for a search results user interface at the device associated with the user; See Paragraph [0022], (Analysis and evaluation of the user interface may include prediction of user intent of sequential steps of user interactions with the UI based on database of historic user interactions with the UI and other equivalent UIs using a machine learning algorithm.) See FIG. 3A, (FIG. 3A illustrates the automatic modification process comprising Block 5 of selecting a winning design following the assessment of the user interface elements as described in [0022], i.e. accessing a layout selection machine-learning model of the online system, wherein the layout selection machine-learning model is trained to identify a layout for a search results user interface at the device associated with the user;)
applying the layout selection machine-learning model to the result metadata including information about the first density and information about the second density to identify the layout for the search results user interface; See Paragraph [0058], (The system may employ a detection algorithm that decomposes a product page of an e-commerce service into different sections including sections for product view, buy section, product details, review section, product recommendations, and bottom panel such as those illustrated in FIG. 6. The assessment may determine that an attribute may be modified including an attribute for a recommendations component may include the number of recommendations, the size of the component, and the recommended content itself.) See Paragraphs [0055]-[0056], (The system initiates the assessment by analyzing the individual components of a screen and identifies page attributes such as the presence/absence of components as well as which components have more/less emphasis, page sections such as a buy section, a product description section, etc., i.e. applying the layout selection machine-learning model to the result metadata including information about the first density and information about the second density to identify the layout for the search results user interface (e.g. the assessment may determine whether to emphasize a particular section of the UI based on attributes of the plurality of sections for display. For an e-commerce display page, the assessment may determine that a particular section, such as a recommendation section, may be modified according to attributes such as the presence/absence of other elements such as a product section or buy section, i.e. density information (e.g. information relating to the presence/absence and/or emphasis of a section)).)
and causing the device associated with the user to display the set of search results at the search results user interface using the identified layout for the search results user interface. See FIG. 10A, (FIG. 10A illustrates the auto-heal planning and implementation of the healing procedure, which determines the winning design to be presented to the user, which is then deployed, i.e. causing the device associated with the user to display the set of search results at the search results user interface using the identified layout for the search results user interface.)
BURGESS, GUPTA and AZIMI are analogous art because they are in the same field of endeavor, dynamic search interfaces. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of BURGESS-GUPTA to include the method of automatically modifying a user interface, such as an e-commerce product page, according to metrics of each element of an interface as disclosed by AZIMI. Paragraph [0059] of AZIMI discloses that user interfaces may be dynamically modified based on live user behavior observations and comparative A/B testing in order to automatically create interfaces that enhance the user experience.
Regarding dependent claim 2,
As discussed above with claim 1, BURGESS-GUPTA-AZIMI discloses all of the limitations.
GUPTA further discloses the step wherein applying the search query machine-learning model comprises: identifying the set of search results further including a set of one or more substitute items for substituting the one or more primary matches. See Paragraph [0040], (The batch search system may utilize information about the query text to determine substitute items for the query text, i.e. identifying the set of search results further including a set of one or more substitute items for substituting the one or more primary matches.)
Regarding dependent claim 5,
As discussed above with claim 1, BURGESS-GUPTA-AZIMI discloses all of the limitations.
AZIMI further discloses the step wherein applying the layout selection machine-learning model comprises: identifying, based on context data associated with the search query and result metadata, a likelihood for conversion by the user for each layout of a plurality of layouts for the search results user interface; See Paragraph [0011], (Potential modifications to the user interface may comprise an evaluation and scoring process which includes prediction of user intent of sequential steps of user interactions with the UI based on a database of historic user interactions with the UI and other equivalent UIs, i.e. identifying, based on context data associated with the search query and result metadata (e.g. modifications to the user interface are made according to user information and UI information. Note [0058] wherein the UI may represent an ecommerce product page wherein the UI may be modified to display additional product recommendations according to the number of recommendations), a likelihood for conversion by the user for each layout of a plurality of layouts for the search results user interface.)
and identifying the layout for the search results user interface that has a highest likelihood for conversion by the user among the plurality of layouts; See Paragraph [0080], (The auto-heal process may determine and deploy the winning versions of each segment of the UI in order to generate the user interface that meets criteria, i.e. identifying the layout for the search results user interface that has a highest likelihood for conversion by the user among the plurality of layouts;)
Regarding dependent claim 7,
As discussed above with claim 5, BURGESS-GUPTA-AZIMI discloses all of the limitations.
BURGESS further discloses the step of generating the context data by extracting, from the search query, a set of one or more features of the search query including at least one of an intent of the user, a specificity of the search query, or a classification category of the search query. See Paragraph [0034], (The digital assistant system may interpret natural language input in spoken and/or textual form to infer a user intent, i.e. generating the context data by extracting, from the search query, a set of one or more features of the search query including at least an intent of the user (e.g. the user input request is processed by NLP module 732 to determine an intent).)
Regarding dependent claim 8,
As discussed above with claim 5, BURGESS-GUPTA-AZIMI discloses all of the limitations.
GUPTA further discloses the step of generating the context data by retrieving, from a database of the online system, a set of features of a retailer associated with the online system that sells a set of items from the set of search results. See Paragraphs [0019]-[0020], (The batch search system may execute batch search queries for retrieving a plurality of search query results, complementary results and replacement results. The system may detect one or more features of an input which may be automatically detected based on classifiers using an item database taxonomy that identifies a plurality of categories.) See Paragraph [0045], (The batch search interface engine 120 may access a product listing platform and perform a classification operation for organizing products of said platform, i.e. generating the context data by retrieving, from a database of the online system, a set of features of a retailer associated with the online system that sells a set of items from the set of search results (e.g. the batch search interface system obtains information relating to classifications of products of a product catalogue. FIG. 2F illustrates an online storefront listing a plurality of products having associated features).)
Regarding independent claim 12,
The claim is analogous to the subject matter of independent claim 1 directed to a non-transitory, computer readable medium and is rejected under similar rationale.
Regarding dependent claim 15,
The claim is analogous to the subject matter of dependent claim 8 directed to a non-transitory, computer readable medium and is rejected under similar rationale.
Regarding independent claim 20,
The claim is analogous to the subject matter of independent claim 1 directed to a computer system and is rejected under similar rationale.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over BURGESS in view of GUPTA and AZIMI as applied to claim 1 above, and further in view of Tendler et al. (US PGPUB No. 2024/0004940; Pub. Date: Jan. 4, 2024).
Regarding dependent claim 3,
As discussed above with claim 1, BURGESS-GUPTA-AZIMI discloses all of the limitations.
BURGESS-GUPTA-AZIMI does not disclose the step wherein applying the layout selection machine-learning model comprises: identifying, based on a number of search results in the set of search results and a set of threshold values, the layout for the search results user interface.
Tendler discloses the step wherein applying the layout selection machine-learning model comprises: identifying, based on a number of search results in the set of search results and a set of threshold values, the layout for the search results user interface. See Paragraph [0204], (Disclosing a system for providing an improved search interface that provides suggested search terms in a variety of categories. The system may provide a search user interface having at least two dynamic contextual categories that a user may interact with. The search user interface may be updated to show suggested search results and may further determine that a number of search results is less than a predetermined quantity, which prompts the system to display the less than the predetermined quantity of search results without display of the refined suggested search terms in the first contextual category, i.e. identifying, based on a number of search results in the set of search results and a set of threshold values, the layout for the search results user interface (e.g. by determining which elements to display and not display).)
BURGESS, GUPTA, AZIMI and Tendler are analogous art because they are in the same field of endeavor, dynamic graphical user interfaces. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of BURGESS-GUPTA-AZIMI to include the method of augmenting a display interface based on a number of search results to be displayed as disclosed by Tendler. Paragraph [0005] of Tendler discloses that the improved search interface allows the system to display refined suggested search terms as a user provides initial search inputs in order to improve the user search process.
Claims 4, 10-11, 13 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over BURGESS in view of GUPTA, AZIMI and Tendler as applied to claim 3 above, and further in view of Gordon et al. (US Patent No. 10,552,183; Date of Patent: Feb. 4, 2020).
Regarding dependent claim 4,
As discussed above with claim 3, BURGESS-GUPTA-AZIMI-Tendler discloses all of the limitations.
BURGESS-GUPTA-AZIMI-Tendler does not disclose the step of collecting feedback data with information about conversion by the user of the set of search results displayed at the search results user interface using the identified layout;
and updating the set of threshold values using the collected feedback data.
Gordon discloses the step of collecting feedback data with information about conversion by the user of the set of search results displayed at the search results user interface using the identified layout; See Col. 7, lines 32-45, (Disclosing a system for tailoring a user interface to a user according to a determined user state. The system comprises an ensemble learning module 112 including a feedback analyzer 118 configured to receive formatting feedback data from at least application user interface 130 indicating at least formatting information associated with user selection of a selectable formatting object 134 which may correspond to one or more user-selectable buttons or other interface objects that allow a user to toggle between different formats available for a particular application user interface, i.e. collecting feedback data with information about conversion by the user of the set of search results displayed at the search results user interface using the identified layout (e.g. feedback includes information about user interaction with interface elements of a GUI).)
Gordon discloses the step of updating the set of threshold values using the collected feedback data. See Col. 8, lines 6-12, (Disclosing a system for tailoring a user interface to a user according to a determined user state. The system comprises an ensemble learning module 112 configured to determine a user state based on one or more threshold values and generate corresponding modification information or commands. Feedback analyzer 118 is configured to update the relevant threshold values to enable better predictive interface formatting functionality over time, i.e. updating the set of threshold values using the collected feedback data.)
BURGESS, GUPTA, AZIMI, Tendler and Gordon are analogous art because they are in the same field of endeavor, dynamic delivery of search results. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of BURGESS-GUPTA-AZIMI-Tendler to include the process of updating threshold values relevant to a user state as disclosed by Gordon. Col. 8, lines 6-12 of Gordon discloses that the process of updating threshold values allows for improvements to predictive interface formatting, which represents an improvement to the user experience.
Regarding dependent claim 10,
As discussed above with claim 1, BURGESS-GUPTA-AZIMI discloses all of the limitations.
BURGESS further discloses the step of receiving a plurality of search queries entered by a collection of users of the online system via user interfaces of devices associated with the collection of users; See Paragraph [0199], (The client portion that executes the method may reside in one or more user devices in communication with a server system 108 through one or more networks.) See Paragraph [0035], (The digital assistant system is capable of accepting user requests in the form of a natural language command, i.e. receiving a plurality of search queries entered by a collection of users of the online system via user interfaces of devices associated with the collection of users (e.g. a plurality of user devices owned by a plurality of users may utilize the digital assistant system to make individual user requests).)
applying the search query machine-learning model to the plurality of search queries and user data associated with the collection of users to generate a collection of sets of search results;
See Paragraphs [0230] & [0232], (Natural language processing module 732 may identify an actionable intent or domain based on the user natural language request and generates a structured query to represent the identified actionable intent. Natural language processing module 732 is implemented using machine learning mechanisms to select the one or more candidate actionable intents. Note [0229] wherein natural language processing module 732 may use user-specific information to supplement information in the user input to further define the user intent, i.e. applying the search query machine-learning model to the plurality of search queries and user data associated with the collection of users to generate a collection of sets of search results (e.g. the natural language processing module 732 receives a user request from one of the plurality of user devices. Therefore, the system is capable of receiving requests from multiple computing devices connected to the digital assistant system);)
BURGESS-GUPTA-AZIMI-Tendler does not disclose the step of randomly assigning a layout from a plurality of layouts for a search results user interface at a respective device associated with a respective user of the collection of users for displaying a respective set of search results from the collection of sets of search results;
generating training data by measuring conversion by the respective user of the respective set of search results displayed using the randomly assigned layout;
and training the layout selection machine-learning model using the training data to generate a set of initial values for a set of parameters of the layout selection machine-learning model.
Gordon discloses the step of randomly assigning a layout from a plurality of layouts for a search results user interface at a respective device associated with a respective user of the collection of users for displaying a respective set of search results from the collection of sets of search results; See Col. 7, lines 60-64, (The application user interface 130 includes a selectable formatting object 134 that allows users to control the display of the different interface formats. Users may configure their user interface to automatically deliver a user interface format according to their selection, which may include the ability to randomly select and present a new format, i.e. randomly assigning a layout from a plurality of layouts for a search results user interface at a respective device associated with a respective user of the collection of users for displaying a respective set of search results from the collection of sets of search results.)
The examiner notes that while Gordon does not explicitly use the term "search results", the method may be applied to search result interfaces such as those of AZIMI.
generating training data by measuring conversion by the respective user of the respective set of search results displayed using the randomly assigned layout; See Col. 5, lines 26-37, (Ensemble learning module 112 is configured to receive data related to a user state from user state drivers in order to determine a user state and also to determine a user interface context based on a determined user state and user interface context to generate user interface formatting information and/or formatting commands for communication with the application user interface.) See Col. 8, lines 28-39, (User state data, user interface context data and/or feedback data generated from other computer devices and systems may be used by ensemble learning module 112 to provide default user state threshold values, make adjustments to threshold values based on feedback data, and/or provide a basis for initial formatting information or commands that may be adjusted as training and customization provided by feedback analyzer 118 progresses, i.e. generating training data by measuring conversion by the respective user of the respective set of search results displayed using the randomly assigned layout (e.g. Note Col. 5, lines 60-67 wherein user state may be expressed using a numeric score/value along one or more user scales associated with a user interface format. As described in Col. 7, lines 60-64, different interface formats may be assigned to a user randomly).)
and training the layout selection machine-learning model using the training data to generate a set of initial values for a set of parameters of the layout selection machine-learning model. See Col. 8, lines 28-39, (User state data, user interface context data and/or feedback data generated from other computer devices and systems may be used by ensemble learning module 112 to provide default user state threshold values, i.e. training the layout selection machine-learning model using the training data to generate a set of initial values for a set of parameters of the layout selection machine-learning model.)
BURGESS, GUPTA, AZIMI, Tendler and Gordon are analogous art because they are in the same field of endeavor, dynamic delivery of search results. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of BURGESS-GUPTA-AZIMI-Tendler to include the process of updating threshold values relevant to a user state as disclosed by Gordon. Col. 8, lines 6-12 of Gordon discloses that the process of updating threshold values allows for improvements to predictive interface formatting, which represents an improvement to the user experience.
Regarding dependent claim 11,
As discussed above with claim 10, BURGESS-GUPTA-AZIMI-Tendler-Gordon discloses all of the limitations.
Gordon further discloses the step of collecting feedback data with information about conversion of the set of search results by the user, the set of search results being displayed at the search results user interface using the identified layout; See Col. 8, lines 28-39, (User state data, user interface context data and/or feedback data generated from other computer devices and systems may be used by ensemble learning module 112 to provide default user state threshold values, make adjustments to threshold values based on feedback data, and/or provide a basis for initial formatting information or commands that may be adjusted as training and customization provided by feedback analyzer 118 progresses. Note FIGs. 5A-5B wherein the graphical user interfaces comprise a display area 506 which may present information as part of a view, i.e. collecting feedback data with information about conversion of the set of search results by the user, the set of search results being displayed at the search results user interface using the identified layout (e.g. the contents of the display area).)
The examiner notes that while Gordon does not explicitly disclose the contents of the display area as being "search results," one of ordinary skill in the art would recognize that the "display area for presenting information" of Gordon may display media such as text, images, and video, including the search results of BURGESS.
Additionally, AZIMI further discloses the step of and re-training the layout selection machine-learning model by updating, using the collected feedback data, to update the set of parameters of the layout selection machine- learning model. See Paragraph [0082], (The auto-assessment algorithms may be upgraded by machine learning and human feedback from the results, wherein said feedback is used to further train the machine learning model, i.e. re-training the layout selection machine-learning model by updating, using the collected feedback data, to update the set of parameters of the layout selection machine- learning model (e.g. updating the auto-assessment algorithm based on attributes of the human feedback).)
Regarding dependent claim 18,
The claim is analogous to the subject matter of dependent claim 10 directed to a non-transitory, computer readable medium and is rejected under similar rationale.
Regarding dependent claim 19,
The claim is analogous to the subject matter of dependent claim 11 directed to a non-transitory, computer readable medium and is rejected under similar rationale.
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over BURGESS in view of GUPTA and AZIMI as applied to claim 1 above, and further in view of Palaniappan et al. (US Patent No. 11,880,649; Date of Patent: Jan. 23, 2024).
Regarding dependent claim 6,
As discussed above with claim 5, BURGESS-GUPTA-AZIMI discloses all of the limitations.
BURGESS-GUPTA-AZIMI does not disclose the step of generating the context data by retrieving, from a database of the online system, a set of features for the user and engagement data associated with the user for each layout of the plurality of layouts.
Palaniappan discloses the step of generating the context data by retrieving, from a database of the online system, a set of features for the user and engagement data associated with the user for each layout of the plurality of layouts. See Col. 5, lines 44-47, (Memory 196 is configured to store source data 112, data types 14, template archive 142, usage parameters 152, etc. Note Col. 7, lines 37-49 wherein Usage parameters 152 associated with a communication template 144 may include, but are not limited to, an average number of times communications 122 generated based on the communication template 144 were accessed by users 106, an average number of times the communications 122 were reviewed by the users 106 from a start to an end of the communications 122, an average number of times multimedia components 148 included in the communications 122 were used/reviewed by the users 106, and an average number of times the communications 122 were accessed using a particular communication channel 130 associated with the communication template 144.) See Col. 19, lines 56-64, (Transform manager 140 may be configured to determine performance indicators for multimedia components based on performance data associated with the multimedia component. Performance data may indicate a performance of the multimedia component in relation to one or more usage parameters, i.e. generating the context data by retrieving, from a database of the online system, a set of features for the user and engagement data associated with the user for each layout of the plurality of layouts (e.g. transform manager 140 may retrieve usage parameters 152 from memory 196 wherein usage parameters represent user interactions and tendencies associated with a multimedia component).)
BURGESS, GUPTA, AZIMI and Palaniappan are analogous art because they are in the same field of endeavor, dynamic delivery of search results. It would have been obvious to anyone having ordinary skill in the art before the effective filing date to modify the system of BURGESS-GUPTA-AZIMI to include the cognitive engine for determining a most appropriate communication template as disclosed by Palaniappan. Col. 4, lines 41-44 of Palaniappan discloses that the transformation performed on source data by transform manager 140 improves readability of the transformed communication, which represents an improvement in the user experience.
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over BURGESS in view of GUPTA and AZIMI as applied to claim 12 above, and further in view of Tendler et al. (US PGPUB No. 2024/0004940; Pub. Date: Jan. 4, 2024) and Gordon et al. (US Patent No.: 10,552,183; Date of Patent: Feb. 4, 2020).
Regarding dependent claim 13,
As discussed above with claim 12, BURGESS-GUPTA-AZIMI discloses all of the limitations.
BURGESS-GUPTA-AZIMI does not disclose the step wherein the instructions further cause the processor to perform steps comprising: applying the layout selection machine-learning model to identify, based on a number of search results in the set of search results and a set of threshold values, the layout for the search results user interface;
Tendler discloses the step of applying the layout selection machine-learning model to identify, based on a number of search results in the set of search results and a set of threshold values, the layout for the search results user interface; See Paragraph [0204], (Disclosing a system for providing an improved search interface that provides suggested search terms in a variety of categories. The system may provide a search user interface having at least two dynamic contextual categories that a user may interact with. The search user interface may be updated to show suggested search results and may further determine that a number of search results is less than a predetermined quantity, which prompts the system to display the less than the predetermined quantity of search results without display of the refined suggested search terms in the first contextual category, i.e. identifying, based on a number of search results in the set of search results and a set of threshold values, the layout for the search results user interface (e.g. by determining which elements to display and not display).)
BURGESS, GUPTA, AZIMI and Tendler are analogous art because they are in the same field of endeavor, dynamic graphical user interfaces. It would have been obvious to anyone having ordinary skill in the art before the effective filing date to modify the system of BURGESS-GUPTA-AZIMI to include the method of augmenting a display interface based on a number of search results to be displayed as disclosed by Tendler. Paragraph [0005] of Tendler discloses that the improved search interface allows the system to display refined suggested search terms as a user provides initial search inputs in order to improve the user search process.
BURGESS-GUPTA-AZIMI-Tendler does not disclose the step of collecting feedback data with information about conversion by the user of the set of search results displayed at the search results user interface using the identified layout;
updating the set of threshold values using the collected feedback data.
Gordon discloses the step of collecting feedback data with information about conversion by the user of the set of search results displayed at the search results user interface using the identified layout; See Col. 7, lines 32-45, (Disclosing a system for tailoring a user interface to a user according to a determined user state. The system comprises an ensemble learning module 112 including a feedback analyzer 118 configured to receive formatting feedback data from at least application user interface 130 indicating at least formatting information associated with user selection of a selectable formatting object 134 which may correspond to one or more user-selectable buttons or other interface objects that allow a user to toggle between different formats available for a particular application user interface, i.e. collecting feedback data with information about conversion by the user of the set of search results displayed at the search results user interface using the identified layout (e.g. feedback includes information about user interaction with interface elements of a currently presented GUI).)
updating the set of threshold values using the collected feedback data. See Col. 8, lines 6-12, (Disclosing a system for tailoring a user interface to a user according to a determined user state. The system comprises an ensemble learning module 112 configured to determine a user state based on one or more threshold values and generate corresponding modification information or commands. Feedback analyzer 118 is configured to update the relevant threshold values to enable better predictive interface formatting functionality over time, i.e. updating the set of threshold values using the collected feedback data.)
BURGESS, GUPTA, AZIMI, Tendler and Gordon are analogous art because they are in the same field of endeavor, dynamic delivery of search results. It would have been obvious to anyone having ordinary skill in the art before the effective filing date to modify the system of BURGESS-GUPTA-AZIMI-Tendler to include the process of updating threshold values relevant to a user state as disclosed by Gordon. Col. 8, lines 6-12 of Gordon discloses that the process of updating threshold values allows for improvements to predictive interface formatting, which represents an improvement to the user experience.
Claim(s) 14 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over BURGESS in view of GUPTA and AZIMI and Palaniappan as applied to claim 12 above, and further in view of Xu (US PGPUB No. 2023/0316373; Pub. Date: Oct. 5, 2023).
Regarding dependent claim 14,
As discussed above with claim 12, BURGESS-GUPTA-AZIMI discloses all of the limitations.
BURGESS-GUPTA-AZIMI does not disclose the step wherein the instructions further cause the processor to perform steps comprising: applying the layout selection machine-learning model to identify, based on context data associated with the search query and result metadata associated with the set of search results, a likelihood for conversion by the user for each layout of a plurality of layouts for the search results user interface;
and applying the layout selection machine-learning model to identify the layout for the search results user interface that has a highest likelihood for conversion by the user among the plurality of layouts.
Xu discloses the step of applying the layout selection machine-learning model to identify, based on context data associated with the search query and result metadata associated with the set of search results, a likelihood for conversion by the user for each layout of a plurality of layouts for the search results user interface; See Paragraph [0033], (Disclosing a system for providing a facet-based context-aware user interface representing an online purchasing system. The facet-based context-aware user interface 202 allows users to navigate product cards determined based at least in part on a user intent.) See Paragraph [0051], (Machine-learning techniques are used to infer, for each product, which attributes changed and which stayed the same during a user search, i.e. applying the layout selection machine-learning model to identify, based on context data (e.g. the user intent) associated with the search query (e.g. the user search queries and refinement) and result metadata associated with the set of search results (e.g. product attribute data for products and substitute products), a likelihood for conversion by the user for each layout of a plurality of layouts for the search results user interface (e.g. user intent is determined based on search queries, refinements, and other user interactions and is used to select a particular facet for the facet-based context-aware user interface 202 such that the user interface reflects products of interest to a user).)
and applying the layout selection machine-learning model to identify the layout for the search results user interface that has a highest likelihood for conversion by the user among the plurality of layouts. See Paragraph [0041], (The facet-based context-aware widget is generated according to historical customer behavior data collected as a customer performs a search for a product. Historical customer behavior data is used to determine which facets of the plurality of products are most important to the user. These facets are ranked individually and used to identify products to be displayed as a product card of the widget, i.e. applying the layout selection machine-learning model to identify the layout for the search results user interface that has a highest likelihood for conversion by the user among the plurality of layouts.)
BURGESS, GUPTA, AZIMI and Xu are analogous art because they are in the same field of endeavor, query processing. It would have been obvious to anyone having ordinary skill in the art before the effective filing date to modify the system of BURGESS-GUPTA-AZIMI to include the process of determining a user intent for displaying product items of a context-aware user interface as disclosed by Xu. Paragraph [0033] of Xu discloses that the determination of user intent allows the system to provide individual users with products that are relevant to the user. The system may filter out products that lack particular facets that a user does not favor and instead include products that are favored. This represents an improvement in the user experience by providing users with products relevant to their interests.
Regarding dependent claim 16,
The claim is analogous to the subject matter of dependent claim 7 directed to a non-transitory, computer readable medium and is rejected under similar rationale.
Claim(s) 9 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over BURGESS in view of GUPTA and AZIMI as applied to claim 5 above, and further in view of LI et al. (US PGPUB No. 2023/0334350; Pub. Date: Oct. 19, 2023).
Regarding dependent claim 9,
As discussed above with claim 5, BURGESS-GUPTA-AZIMI discloses all of the limitations.
BURGESS-GUPTA-AZIMI does not disclose the step wherein generating the result metadata further comprises extracting, from the set of search results, a third density of advertisements in the set of search results.
LI discloses the step wherein generating the result metadata further comprises extracting, from the set of search results, a third density of advertisements in the set of search results. See Paragraph [0030], (Disclosing a system for receiving data indicating a matching density defined as a number of matches per query. The system may gather advertising data 20 which may indicate a matching density 28 including numbers of advertisements output per query, i.e. generating the result metadata by extracting, from the set of search results, a density of advertisements in the set of search results.)
BURGESS, GUPTA, AZIMI and LI are analogous art because they are in the same field of endeavor, query processing. It would have been obvious to anyone having ordinary skill in the art before the effective filing date to modify the system of BURGESS-GUPTA-AZIMI to include the process of determining query metadata referring to densities of query metrics as disclosed by LI. Paragraph [0093] of LI discloses that the causal modeling may allow a search engine provider and/or customers of the search engine provider to measure the effectiveness with which advertisements are matched to search queries, which may strongly affect both revenue collected by the search engine and performance of the customer's advertisement campaign.
Regarding dependent claim 17,
The claim is analogous to the subject matter of dependent claim 9 directed to a non-transitory, computer readable medium and is rejected under similar rationale.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 12 and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s amendments necessitated the new grounds of rejection presented in this Office Action.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Fernando M Mari whose telephone number is (571)272-2498. The examiner can normally be reached Monday-Friday 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann J. Lo can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FMMV/Examiner, Art Unit 2159 /ANN J LO/Supervisory Patent Examiner, Art Unit 2159