Prosecution Insights
Last updated: April 19, 2026
Application No. 18/839,975

SEARCH RESULT DISPLAY METHOD, COMPUTER DEVICE, AND STORAGE MEDIUM

Status: Non-Final Office Action (§103)
Filed: Aug 20, 2024
Examiner: GIULIANI, GIUSEPPI J
Art Unit: 2153
Tech Center: 2100 — Computer Architecture & Software
Assignee: DOUYIN VISION CO., LTD.
OA Round: 3 (Non-Final)

Grant Probability: 58% (Moderate); 65% with interview
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 58% (162 granted / 279 resolved; +3.1% vs TC avg)
Interview Lift: +7.2% for resolved cases with an interview (moderate lift)
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 25 applications
Total Applications: 304 (across all art units)
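The headline figures above are simple ratios over the examiner's resolved cases. As a quick sanity check (a minimal sketch; the counts and percentages come from the dashboard above, the computation is just ratio arithmetic):

```python
# Recompute the examiner's headline statistics from the raw counts shown above.
granted = 162
resolved = 279

career_allow_rate = granted / resolved  # fraction of resolved cases granted
print(f"Career allow rate: {career_allow_rate:.1%}")  # 58.1%, displayed as 58%

# Grant probability for this application, with and without an examiner
# interview, as reported on the dashboard.
base_probability = 0.58
with_interview = 0.65
interview_lift = with_interview - base_probability
print(f"Interview lift: {interview_lift:+.0%}")  # +7%
```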

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 279 resolved cases.
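Each per-statute delta implies the Tech Center baseline it was measured against (baseline = rate - delta). A small sketch recovering those baselines from the table above; the observation that every delta implies the same ~40% reference value is an inference from the numbers, not stated in the source:

```python
# Statute-specific rates and "vs TC avg" deltas, as percentages from the
# table above. The implied TC-average baseline is rate - delta.
stats = {
    "101": (11.4, -28.6),
    "103": (53.7, +13.7),
    "102": (14.8, -25.2),
    "112": (12.7, -27.3),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: {rate:.1f}% (implied TC avg {tc_avg:.1f}%)")
# Every delta implies the same ~40.0% Tech Center baseline.
```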

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination - 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. The applicant’s submission for RCE filed on 3 February 2026 has been entered.

Remarks

This action is in response to the applicant’s RCE filed 3 February 2026, which is in response to the USPTO office action mailed 3 November 2025. Claims 1, 4, 13, 14 and 17 are amended. Claims 2, 12 and 15 are cancelled. Claims 1, 3-11, 13, 14 and 16-21 are currently pending.

Response to Arguments

With respect to the 35 USC § 103 rejection of claims 1, 3-11, 13, 14 and 16-21, the applicant’s arguments are moot in view of new grounds of rejection, as necessitated by the applicant's amendments.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-11, 13, 14 and 16-21 are rejected under 35 U.S.C.
103 as being unpatentable over Zheng et al., US 2021/0271886 A1 (hereinafter “Zheng”) in view of NELSON et al., US 2017/0083180 A1 (hereinafter “Nelson”) in further view of Gunawardena, US 2020/0110943 A1 (hereinafter “Gunawardena”). Claim 1: Zheng teaches a search result display method, comprising: obtaining, in response to a search request, a preview resource of a target video that matches the search request (Zheng, [Fig. 3] note 38, [0064] note FIG. 3 therefore illustrates a workflow list view on the UI 31. Each workflow 33-36 links to a video player that allows the user to navigate to the next or previous step. A text search command box 38 also is provided for keyword searching of the data of the workflow information generated by the AI system 10), wherein the preview resource comprises video preview information of the target video and structured information for answering the search request, the structured information is information extracted from the target video, and the structured information comprises at least prerequisite information, answer information, and time information that respectively corresponds to the prerequisite information and the answer information in the target video (Zheng, [0058] note the AI module 15 analyzes, edits, and organizes the digital workflow content and automatically generates a step-by-step Interactive How-to Video using the digital workflow content or generate sub-components of a video, [0066] note steps 45 are shown in successive time sequence, [0084] note AI module 15 therefore may: perform auto-tagging of key words and key images; auto-segment videos into steps; auto-summarize step names, [Fig. 5] note video segments associated with steps 45-01 to 45-14 extracted by the AI system, including an image, description (e.g. “01. Introduction”, “02. 
Tools”, etc…) and a video segment duration); and displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information, wherein the video preview information is configured to be displayed as an image, and the prerequisite information and the answer information are configured to be displayed as text (Zheng, [Fig. 5] note video segments associated with steps 45-01 to 45-14 extracted by the AI system, including an image, description (e.g. “01. Introduction”, “02. Tools”, etc…) and a video segment duration, [0066] note The UI 40 includes a step navigation aid 42 that allows a user to navigate to a specific task in a workflow. When the step navigation aid 41 is clicked or activated, FIG. 5 shows a UI 44 showing all the steps 45 (steps 45-01 through 45-14) that were extracted by the AI Stephanie system 10, which are automatically shown). Zheng does not explicitly teach the prerequisite information and the answer information corresponding to the prerequisite information are displayed by using multi-level titles, the multi-level titles comprise a first-level title and a second-level title, the prerequisite information is configured as the first-level title, and the answer information corresponding to the prerequisite information is configured as the second-level title; timestamp information; and wherein the displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information comprises: displaying the video preview information in a first area of the search result page, successively displaying the timestamp information in a second area adjacent to the first area, and displaying, at a corresponding position of each piece of timestamp information, prerequisite information and answer information corresponding to the piece of time information. 
However, Nelson teaches the prerequisite information and the answer information corresponding to the prerequisite information are displayed by using multi-level titles, the multi-level titles comprise a first-level title and a second-level title, the prerequisite information is configured as the first-level title, and the answer information corresponding to the prerequisite information is configured as the second-level title (Nelson, [0051] note After the search system 300 executes a search, the user 10 is provided with (by way of the user device displaying 202) search results 215 organized/grouped into one or more cards 220 or one or more card stacks 240… grouping system 400 groups the search results 215 into cards 220, the cards into card stacks 240, first-level card stacks 240 into second-level card stacks 240, or second-level card stacks 240 into third-level card stacks 240, and so on, [Fig. 4C], [0063] note grouping system 400 groups the search results 215 into cards 220 and subsequently first-level stacks 240a… each card 220 represents a collection of similar search results 215… the screen 202 may show a header 252 of the first card 220a that includes the name of the first card (Card A1), under the header 252, the screen 202 shows the search results 215 associated with the first card (Card A1)). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the step-by-step workflow including video segments of Zheng with the search card stacking of Nelson according to known methods (i.e. displaying search results as multi-level card stacks). Motivation for doing so is that it is desirable to have a system that allows a user to manage the amount of content displayed on the user device (Nelson, [0042]). 
Zheng and Nelson do not explicitly teach timestamp information; and wherein the displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information comprises: displaying the video preview information in a first area of the search result page, successively displaying the timestamp information in a second area adjacent to the first area, and displaying, at a corresponding position of each piece of timestamp information, prerequisite information and answer information corresponding to the piece of time information. However, Gunawardena teaches this (Gunawardena, [Fig. 3], [0063] note Users can also search videos based on a query term. As a nonlimiting example, FIG. 3 depicts the results from a global query as a result of searching over the entire collection using the term “for loop”. It is noted that queries can be done across videos (inter) and within video (intra) to find the most closely matched video clip(s). For instance, as shown in FIG. 3, the query resulted in many video clips (shown in the search results), and upon selecting one of the hit videos (2.C An alternative: the for loop), the user can see the locations of that particular video, where the query term appears). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the step-by-step workflow including video segments of Zheng and Nelson with the video search of Gunawardena according to known methods (i.e. displaying video clips along with locations in which a query term appears). Motivation for doing so is that this allows users to jump to the exact location of the specific video (Gunawardena, [0063]), thereby improving user experience. 
Claim 3: Zheng, Nelson and Gunawardena teach the search result display method according to claim 2, wherein at least one piece of prerequisite information is comprised, and each piece of prerequisite information corresponds to at least one piece of answer information; and the displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information comprises: displaying the video preview information in the first area of the search result page, successively displaying, in the second area adjacent to the first area, each piece of prerequisite information and timestamp information corresponding to the piece of prerequisite information in the structured information, and displaying, under each piece of prerequisite information, at least one piece of corresponding answer information and timestamp information corresponding to each piece of answer information (Gunawardena, [Fig. 3], [0063] note Users can also search videos based on a query term. As a nonlimiting example, FIG. 3 depicts the results from a global query as a result of searching over the entire collection using the term “for loop”. It is noted that queries can be done across videos (inter) and within video (intra) to find the most closely matched video clip(s). For instance, as shown in FIG. 3, the query resulted in many video clips (shown in the search results), and upon selecting one of the hit videos (2.C An alternative: the for loop), the user can see the locations of that particular video, where the query term appears). 
Claim 4: Zheng, Nelson and Gunawardena teach the search result display method according to claim 1, wherein the displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information comprises: when the structured information comprises one piece of prerequisite information and a plurality of pieces of answer information, displaying the video preview information in the first area of the search result page, displaying, in the second area adjacent to the first area, the prerequisite information and timestamp information corresponding to the prerequisite information, and successively displaying, under the prerequisite information, the plurality of pieces of answer information and timestamp information respectively corresponding to the pieces of answer information in a sequence of timestamps corresponding to the plurality of pieces of answer information, wherein the one piece of prerequisite information and the plurality of pieces of answer information are displayed by using the multi-level titles; or when the structured information comprises a plurality of pieces of prerequisite information and each piece of prerequisite information corresponds to a plurality of pieces of answer information, displaying the video preview information in the first area of the search result page, successively displaying, in the second area adjacent to the first area, the pieces of prerequisite information and timestamp information corresponding to the pieces of prerequisite information in a sequence of timestamps of the pieces of prerequisite information, and successively displaying, under each piece of prerequisite information, pieces of answer information corresponding to the piece of prerequisite information and timestamp information respectively corresponding to the pieces of answer information in a sequence of timestamps of the pieces of answer information, wherein the each piece of 
prerequisite information and the plurality of pieces of answer information corresponding to the each piece of prerequisite information are displayed by using the multi-level titles (Gunawardena, [Fig. 3], [0063] note Users can also search videos based on a query term. As a nonlimiting example, FIG. 3 depicts the results from a global query as a result of searching over the entire collection using the term “for loop”. It is noted that queries can be done across videos (inter) and within video (intra) to find the most closely matched video clip(s). For instance, as shown in FIG. 3, the query resulted in many video clips (shown in the search results), and upon selecting one of the hit videos (2.C An alternative: the for loop), the user can see the locations of that particular video, where the query term appears).

Claim 5: Zheng, Nelson and Gunawardena teach the search result display method according to claim 1, wherein the displaying the structured information at an associated position of the video preview information comprises: determining, based on an answer type of answer information corresponding to each piece of prerequisite information, a display form of each piece of prerequisite information and the corresponding answer information; and displaying each piece of prerequisite information and the corresponding answer information at the associated position of the video preview information based on the display form (Zheng, [Fig. 6], [0067] note the workflow 32 with search requests. The search button 47 links or opens the search UI 48, which contains a search command bar 49. With the UI 48 of FIG. 6, users can look for a specific object or objects in any of the steps of a workflow, either by typing in keywords of what he/she is looking for… The search results can be specific steps 45-01 to 45-14 or specific video segments within the steps in which the keyword is embedded such phrases in which the word is spoken or an object is displayed. 
The search term may also be highlighted in the results, such as in a portion of the transcribed text).

Claim 6: Zheng, Nelson and Gunawardena teach the search result display method according to claim 5, wherein the answer type of the answer information corresponding to each piece of prerequisite information is determined in the following manner: if the target video comprises a plurality of pieces of step information corresponding to the search request, determining that the answer type is a first answer type, wherein each piece of answer information corresponds to one piece of step information; or if the target video comprises a plurality of parts of answer information corresponding to the search request, and each part of answer information comprises a plurality of parts of sub-answer information, determining that the answer type is a second answer type (Zheng, [Fig. 6], [0067] note the workflow 32 with search requests. The search button 47 links or opens the search UI 48, which contains a search command bar 49. With the UI 48 of FIG. 6, users can look for a specific object or objects in any of the steps of a workflow, either by typing in keywords of what he/she is looking for… The search results can be specific steps 45-01 to 45-14 or specific video segments within the steps in which the keyword is embedded such phrases in which the word is spoken or an object is displayed. The search term may also be highlighted in the results, such as in a portion of the transcribed text; i.e. specific video segments within the steps reads on sub-answer information). 
Claim 7: Zheng, Nelson and Gunawardena teach the search result display method according to claim 6, wherein in response to the answer type being the first answer type, the displaying each piece of prerequisite information and the corresponding answer information at the associated position of the video preview information based on the display form comprises: displaying, at the associated position of the video preview information, timestamp information of the prerequisite information and each part of answer information in a sequence of timestamps corresponding to the prerequisite information and the corresponding answer information (Gunawardena, [Fig. 3], [0063] note Users can also search videos based on a query term. As a nonlimiting example, FIG. 3 depicts the results from a global query as a result of searching over the entire collection using the term “for loop”. It is noted that queries can be done across videos (inter) and within video (intra) to find the most closely matched video clip(s). For instance, as shown in FIG. 3, the query resulted in many video clips (shown in the search results), and upon selecting one of the hit videos (2.C An alternative: the for loop), the user can see the locations of that particular video, where the query term appears).

Claim 8: Zheng, Nelson and Gunawardena teach the search result display method according to claim 6, wherein in response to the answer type corresponding to the answer information being the second answer type, the displaying each piece of prerequisite information and the corresponding answer information at the associated position of the video preview information based on the display form comprises: displaying each part of answer information at the associated position of the video preview information; and displaying, in response to a trigger operation for any piece of answer information, a plurality of pieces of sub-answer information corresponding to the piece of answer information (Zheng, [Fig. 
6], [0067] note the workflow 32 with search requests. The search button 47 links or opens the search UI 48, which contains a search command bar 49. With the UI 48 of FIG. 6, users can look for a specific object or objects in any of the steps of a workflow, either by typing in keywords of what he/she is looking for… The search results can be specific steps 45-01 to 45-14 or specific video segments within the steps in which the keyword is embedded such phrases in which the word is spoken or an object is displayed. The search term may also be highlighted in the results, such as in a portion of the transcribed text; i.e. specific video segments within the steps reads on sub-answer information).

Claim 9: Zheng, Nelson and Gunawardena teach the search result display method according to claim 1, wherein the structured information further comprises recommendation information associated with the search request (Zheng, [0065] note the workflow information not only includes the text data converted from the audio portion, but also additional data identified by the video analysis, which may then be keyword searched using the text search feature or voice search feature); and the displaying the structured information at an associated position of the video preview information comprises: displaying, at a corresponding position of each piece of timestamp information, prerequisite information and answer information corresponding to the piece of timestamp information, and displaying, under at least one piece of answer information, the recommendation information associated with the search request and the answer information (Gunawardena, [Fig. 3], [0063] note Users can also search videos based on a query term. As a nonlimiting example, FIG. 3 depicts the results from a global query as a result of searching over the entire collection using the term “for loop”. It is noted that queries can be done across videos (inter) and within video (intra) to find the most closely matched video clip(s). 
For instance, as shown in FIG. 3, the query resulted in many video clips (shown in the search results), and upon selecting one of the hit videos (2.C An alternative: the for loop), the user can see the locations of that particular video, where the query term appears).

Claim 10: Zheng, Nelson and Gunawardena teach the search result display method according to claim 1, further comprising: playing, in response to a trigger operation for the prerequisite information or any part of answer information in the structured information, the target video from a timestamp position corresponding to the prerequisite information or the part of answer information (Gunawardena, [0063] note This allows users to jump to the exact location of the specific video).

Claim 11: Zheng, Nelson and Gunawardena teach the search result display method according to claim 1, wherein in response to a plurality of target videos being comprised, the method further comprises: determining degrees of association between structured information of the plurality of target videos and the search request, and sorting the plurality of target videos based on the degrees of association; and the displaying the video preview information, and displaying the structured information at an associated position of the video preview information comprises: successively displaying video preview information of the plurality of target videos based on a result of sorting the target videos, and displaying the structured information at associated positions of the video preview information (Zheng, [Fig. 6], [0067] note With the UI 48 of FIG. 
6, users can look for a specific object or objects in any of the steps of a workflow, either by typing in keywords of what he/she is looking for into the search command bar 49 or using their voice commands, such as “Stephanie, Show me bolts and nuts”… Once the search request is entered, such as by searching the keyword “nut”, the search request will be converted into embeddings, a high dimensional mathematical vector, wherein FIG. 6 illustrates a subset of the steps 45 that have been tagged by the AI system 10 with keyword data or other search data associated with the search request. In other words, the subset of steps 45 have the term “nut” associated with them or other terms having similar word embeddings since they may refer to a nut or words with similar meanings in the audio data or in the video data. The search results can be specific steps 45-01 to 45-14 or specific video segments within the steps in which the keyword is embedded such phrases in which the word is spoken or an object is displayed).

Claim 13: Zheng teaches a computer device, comprising: a processor and a memory, wherein the memory stores machine-readable instructions executable by the processor, the processor is configured to execute the machine-readable instructions stored in the memory, and when the machine-readable instructions are executed by the processor, the processor performs a search result display method, and the search result display method comprises: obtaining, in response to a search request, a preview resource of a target video that matches the search request (Zheng, [Fig. 3] note 38, [0064] note FIG. 3 therefore illustrates a workflow list view on the UI 31. Each workflow 33-36 links to a video player that allows the user to navigate to the next or previous step. 
A text search command box 38 also is provided for keyword searching of the data of the workflow information generated by the AI system 10), wherein the preview resource comprises video preview information of the target video and structured information for answering the search request, the structured information is information extracted from the target video, and the structured information comprises at least prerequisite information, answer information, and time information that respectively corresponds to the prerequisite information and the answer information in the target video (Zheng, [0058] note the AI module 15 analyzes, edits, and organizes the digital workflow content and automatically generates a step-by-step Interactive How-to Video using the digital workflow content or generate sub-components of a video, [0066] note steps 45 are shown in successive time sequence, [0084] note AI module 15 therefore may: perform auto-tagging of key words and key images; auto-segment videos into steps; auto-summarize step names, [Fig. 5] note video segments associated with steps 45-01 to 45-14 extracted by the AI system, including an image, description (e.g. “01. Introduction”, “02. Tools”, etc…) and a video segment duration); and displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information, wherein the video preview information is configured to be displayed as an image, and the prerequisite information and the answer information are configured to be displayed as text (Zheng, [Fig. 5] note video segments associated with steps 45-01 to 45-14 extracted by the AI system, including an image, description (e.g. “01. Introduction”, “02. Tools”, etc…) and a video segment duration, [0066] note The UI 40 includes a step navigation aid 42 that allows a user to navigate to a specific task in a workflow. When the step navigation aid 41 is clicked or activated, FIG. 
5 shows a UI 44 showing all the steps 45 (steps 45-01 through 45-14) that were extracted by the AI Stephanie system 10, which are automatically shown). Zheng does not explicitly teach the prerequisite information and the answer information corresponding to the prerequisite information are displayed by using multi-level titles, the multi-level titles comprise a first-level title and a second-level title, the prerequisite information is configured as the first-level title, and the answer information corresponding to the prerequisite information is configured as the second-level title; timestamp information; and wherein the displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information comprises: displaying the video preview information in a first area of the search result page, successively displaying the timestamp information in a second area adjacent to the first area, and displaying, at a corresponding position of each piece of timestamp information, prerequisite information and answer information corresponding to the piece of time information. 
However, Nelson teaches the prerequisite information and the answer information corresponding to the prerequisite information are displayed by using multi-level titles, the multi-level titles comprise a first-level title and a second-level title, the prerequisite information is configured as the first-level title, and the answer information corresponding to the prerequisite information is configured as the second-level title (Nelson, [0051] note After the search system 300 executes a search, the user 10 is provided with (by way of the user device displaying 202) search results 215 organized/grouped into one or more cards 220 or one or more card stacks 240… grouping system 400 groups the search results 215 into cards 220, the cards into card stacks 240, first-level card stacks 240 into second-level card stacks 240, or second-level card stacks 240 into third-level card stacks 240, and so on, [Fig. 4C], [0063] note grouping system 400 groups the search results 215 into cards 220 and subsequently first-level stacks 240a… each card 220 represents a collection of similar search results 215… the screen 202 may show a header 252 of the first card 220a that includes the name of the first card (Card A1), under the header 252, the screen 202 shows the search results 215 associated with the first card (Card A1)). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the step-by-step workflow including video segments of Zheng with the search card stacking of Nelson according to known methods (i.e. displaying search results as multi-level card stacks). Motivation for doing so is that it is desirable to have a system that allows a user to manage the amount of content displayed on the user device (Nelson, [0042]). 
Zheng and Nelson do not explicitly teach timestamp information; and wherein the displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information comprises: displaying the video preview information in a first area of the search result page, successively displaying the timestamp information in a second area adjacent to the first area, and displaying, at a corresponding position of each piece of timestamp information, prerequisite information and answer information corresponding to the piece of time information. However, Gunawardena teaches this (Gunawardena, [Fig. 3], [0063] note Users can also search videos based on a query term. As a nonlimiting example, FIG. 3 depicts the results from a global query as a result of searching over the entire collection using the term “for loop”. It is noted that queries can be done across videos (inter) and within video (intra) to find the most closely matched video clip(s). For instance, as shown in FIG. 3, the query resulted in many video clips (shown in the search results), and upon selecting one of the hit videos (2.C An alternative: the for loop), the user can see the locations of that particular video, where the query term appears). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the step-by-step workflow including video segments of Zheng and Nelson with the video search of Gunawardena according to known methods (i.e. displaying video clips along with locations in which a query term appears). Motivation for doing so is that this allows users to jump to the exact location of the specific video (Gunawardena, [0063]), thereby improving user experience. 
Claim 14: Zheng teaches a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores a computer program, and when the computer program is run by a computer device, the computer device performs a search result display method, and the search result display method comprises: obtaining, in response to a search request, a preview resource of a target video that matches the search request (Zheng, [Fig. 3] note 38, [0064] note FIG. 3 therefore illustrates a workflow list view on the UI 31. Each workflow 33-36 links to a video player that allows the user to navigate to the next or previous step. A text search command box 38 also is provided for keyword searching of the data of the workflow information generated by the AI system 10), wherein the preview resource comprises video preview information of the target video and structured information for answering the search request, the structured information is information extracted from the target video, and the structured information comprises at least prerequisite information, answer information, and time information that respectively corresponds to the prerequisite information and the answer information in the target video (Zheng, [0058] note the AI module 15 analyzes, edits, and organizes the digital workflow content and automatically generates a step-by-step Interactive How-to Video using the digital workflow content or generate sub-components of a video, [0066] note steps 45 are shown in successive time sequence, [0084] note AI module 15 therefore may: perform auto-tagging of key words and key images; auto-segment videos into steps; auto-summarize step names, [Fig. 5] note video segments associated with steps 45-01 to 45-14 extracted by the AI system, including an image, description (e.g. “01. Introduction”, “02. 
Tools”, etc…) and a video segment duration); and displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information, wherein the video preview information is configured to be displayed as an image, and the prerequisite information and the answer information are configured to be displayed as text (Zheng, [Fig. 5] note video segments associated with steps 45-01 to 45-14 extracted by the AI system, including an image, description (e.g. “01. Introduction”, “02. Tools”, etc…) and a video segment duration, [0066] note The UI 40 includes a step navigation aid 42 that allows a user to navigate to a specific task in a workflow. When the step navigation aid 41 is clicked or activated, FIG. 5 shows a UI 44 showing all the steps 45 (steps 45-01 through 45-14) that were extracted by the AI Stephanie system 10, which are automatically shown). Zheng does not explicitly teach the prerequisite information and the answer information corresponding to the prerequisite information are displayed by using multi-level titles, the multi-level titles comprise a first-level title and a second-level title, the prerequisite information is configured as the first-level title, and the answer information corresponding to the prerequisite information is configured as the second-level title; timestamp information; and wherein the displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information comprises: displaying the video preview information in a first area of the search result page, successively displaying the timestamp information in a second area adjacent to the first area, and displaying, at a corresponding position of each piece of timestamp information, prerequisite information and answer information corresponding to the piece of time information. 
However, Nelson teaches the prerequisite information and the answer information corresponding to the prerequisite information are displayed by using multi-level titles, the multi-level titles comprise a first-level title and a second-level title, the prerequisite information is configured as the first-level title, and the answer information corresponding to the prerequisite information is configured as the second-level title (Nelson, [0051] note After the search system 300 executes a search, the user 10 is provided with (by way of the user device displaying 202) search results 215 organized/grouped into one or more cards 220 or one or more card stacks 240… grouping system 400 groups the search results 215 into cards 220, the cards into card stacks 240, first-level card stacks 240 into second-level card stacks 240, or second-level card stacks 240 into third-level card stacks 240, and so on, [Fig. 4C], [0063] note grouping system 400 groups the search results 215 into cards 220 and subsequently first-level stacks 240a… each card 220 represents a collection of similar search results 215… the screen 202 may show a header 252 of the first card 220a that includes the name of the first card (Card A1), under the header 252, the screen 202 shows the search results 215 associated with the first card (Card A1)). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the step-by-step workflow including video segments of Zheng with the search card stacking of Nelson according to known methods (i.e. displaying search results as multi-level card stacks). Motivation for doing so is that it is desirable to have a system that allows a user to manage an amount of content displayed on the user device (Nelson, [0042]). 
Zheng and Nelson do not explicitly teach timestamp information; and wherein the displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information comprises: displaying the video preview information in a first area of the search result page, successively displaying the timestamp information in a second area adjacent to the first area, and displaying, at a corresponding position of each piece of timestamp information, prerequisite information and answer information corresponding to the piece of time information. However, Gunawardena teaches this (Gunawardena, [Fig. 3], [0063] note Users can also search videos based on a query term. As a nonlimiting example, FIG. 3 depicts the results from a global query as a result of searching over the entire collection using the term “for loop”. It is noted that queries can be done across videos (inter) and within video (intra) to find the most closely matched video clip(s). For instance, as shown in FIG. 3, the query resulted in many video clips (shown in the search results), and upon selecting one of the hit videos (2.C An alternative: the for loop), the user can see the locations of that particular video, where the query term appears). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the step-by-step workflow including video segments of Zheng and Nelson with the video search of Gunawardena according to known methods (i.e. displaying video clips along with locations in which a query term appears). Motivation for doing so is that this allows users to jump to the exact location of the specific video (Gunawardena, [0063]), thereby improving user experience. 
Claim 16: Zheng, Nelson and Gunawardena teach the computer device according to claim 13, wherein at least one piece of prerequisite information is comprised, and each piece of prerequisite information corresponds to at least one piece of answer information; and the displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information comprises: displaying the video preview information in the first area of the search result page, successively displaying, in the second area adjacent to the first area, each piece of prerequisite information and timestamp information corresponding to the piece of prerequisite information in the structured information, and displaying, under each piece of prerequisite information, at least one piece of corresponding answer information and timestamp information corresponding to each piece of answer information (Gunawardena, [Fig. 3], [0063] note Users can also search videos based on a query term. As a nonlimiting example, FIG. 3 depicts the results from a global query as a result of searching over the entire collection using the term “for loop”. It is noted that queries can be done across videos (inter) and within video (intra) to find the most closely matched video clip(s). For instance, as shown in FIG. 3, the query resulted in many video clips (shown in the search results), and upon selecting one of the hit videos (2.C An alternative: the for loop), the user can see the locations of that particular video, where the query term appears). 
Claim 17: Zheng, Nelson and Gunawardena teach the computer device according to claim 13, wherein the displaying the video preview information on a search result page, and displaying the structured information at an associated position of the video preview information comprises: when the structured information comprises one piece of prerequisite information and a plurality of pieces of answer information, displaying the video preview information in the first area of the search result page, displaying, in the second area adjacent to the first area, the prerequisite information and timestamp information corresponding to the prerequisite information, and successively displaying, under the prerequisite information, the plurality of pieces of answer information and timestamp information respectively corresponding to the pieces of answer information in a sequence of timestamps corresponding to the plurality of pieces of answer information, wherein the one piece of prerequisite information and the plurality of pieces of answer information are displayed by using the multi-level titles; or when the structured information comprises a plurality of pieces of prerequisite information and each piece of prerequisite information corresponds to a plurality of pieces of answer information, displaying the video preview information in the first area of the search result page, successively displaying, in the second area adjacent to the first area, the pieces of prerequisite information and timestamp information corresponding to the pieces of prerequisite information in a sequence of timestamps of the pieces of prerequisite information, and successively displaying, under each piece of prerequisite information, pieces of answer information corresponding to the piece of prerequisite information and timestamp information respectively corresponding to the pieces of answer information in a sequence of timestamps of the pieces of answer information, wherein the each piece of prerequisite 
information and the plurality of pieces of answer information corresponding to the each piece of prerequisite information are displayed by using the multi-level titles (Gunawardena, [Fig. 3], [0063] note Users can also search videos based on a query term. As a nonlimiting example, FIG. 3 depicts the results from a global query as a result of searching over the entire collection using the term “for loop”. It is noted that queries can be done across videos (inter) and within video (intra) to find the most closely matched video clip(s). For instance, as shown in FIG. 3, the query resulted in many video clips (shown in the search results), and upon selecting one of the hit videos (2.C An alternative: the for loop), the user can see the locations of that particular video, where the query term appears).

Claim 18: Zheng, Nelson and Gunawardena teach the computer device according to claim 13, wherein the displaying the structured information at an associated position of the video preview information comprises: determining, based on an answer type of answer information corresponding to each piece of prerequisite information, a display form of each piece of prerequisite information and the corresponding answer information; and displaying each piece of prerequisite information and the corresponding answer information at the associated position of the video preview information based on the display form (Zheng, [Fig. 6], [0067] note the workflow 32 with search requests. The search button 47 links or opens the search UI 48, which contains a search command bar 49. With the UI 48 of FIG. 6, users can look for a specific object or objects in any of the steps of a workflow, either by typing in keywords of what he/she is looking for… The search results can be specific steps 45-01 to 45-14 or specific video segments within the steps in which the keyword is embedded such phrases in which the word is spoken or an object is displayed. 
The search term may also be highlighted in the results, such as in a portion of the transcribed text).

Claim 19: Zheng, Nelson and Gunawardena teach the computer device according to claim 18, wherein the answer type of the answer information corresponding to each piece of prerequisite information is determined in the following manner: if the target video comprises a plurality of pieces of step information corresponding to the search request, determining that the answer type is a first answer type, wherein each piece of answer information corresponds to one piece of step information; or if the target video comprises a plurality of parts of answer information corresponding to the search request, and each part of answer information comprises a plurality of parts of sub-answer information, determining that the answer type is a second answer type (Zheng, [Fig. 6], [0067] note the workflow 32 with search requests. The search button 47 links or opens the search UI 48, which contains a search command bar 49. With the UI 48 of FIG. 6, users can look for a specific object or objects in any of the steps of a workflow, either by typing in keywords of what he/she is looking for… The search results can be specific steps 45-01 to 45-14 or specific video segments within the steps in which the keyword is embedded such phrases in which the word is spoken or an object is displayed. The search term may also be highlighted in the results, such as in a portion of the transcribed text; i.e. specific video segments within the steps reads on sub-answer information). 
Claim 20: Zheng, Nelson and Gunawardena teach the computer device according to claim 19, wherein in response to the answer type being the first answer type, the displaying each piece of prerequisite information and the corresponding answer information at the associated position of the video preview information based on the display form comprises: displaying, at the associated position of the video preview information, timestamp information of the prerequisite information and each part of answer information in a sequence of timestamps corresponding to the prerequisite information and the corresponding answer information (Gunawardena, [Fig. 3], [0063] note Users can also search videos based on a query term. As a nonlimiting example, FIG. 3 depicts the results from a global query as a result of searching over the entire collection using the term “for loop”. It is noted that queries can be done across videos (inter) and within video (intra) to find the most closely matched video clip(s). For instance, as shown in FIG. 3, the query resulted in many video clips (shown in the search results), and upon selecting one of the hit videos (2.C An alternative: the for loop), the user can see the locations of that particular video, where the query term appears).

Claim 21: Zheng, Nelson and Gunawardena teach the computer device according to claim 19, wherein in response to the answer type corresponding to the answer information being the second answer type, the displaying each piece of prerequisite information and the corresponding answer information at the associated position of the video preview information based on the display form comprises: displaying each part of answer information at the associated position of the video preview information; and displaying, in response to a trigger operation for any piece of answer information, a plurality of pieces of sub-answer information corresponding to the piece of answer information (Zheng, [Fig. 
6], [0067] note the workflow 32 with search requests. The search button 47 links or opens the search UI 48, which contains a search command bar 49. With the UI 48 of FIG. 6, users can look for a specific object or objects in any of the steps of a workflow, either by typing in keywords of what he/she is looking for… The search results can be specific steps 45-01 to 45-14 or specific video segments within the steps in which the keyword is embedded such phrases in which the word is spoken or an object is displayed. The search term may also be highlighted in the results, such as in a portion of the transcribed text; i.e. specific video segments within the steps reads on sub-answer information).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Giuseppi Giuliani whose telephone number is (571)270-7128. The examiner can normally be reached Monday-Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley can be reached at (571)272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GIUSEPPI GIULIANI/
Primary Examiner, Art Unit 2153

Prosecution Timeline

Aug 20, 2024: Application Filed
Jun 11, 2025: Non-Final Rejection — §103
Sep 15, 2025: Response Filed
Oct 30, 2025: Final Rejection — §103
Jan 05, 2026: Response after Non-Final Action
Feb 03, 2026: Request for Continued Examination
Feb 11, 2026: Response after Non-Final Action
Mar 05, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602410: MULTIMODAL CONTEXT SELECTION FOR LARGE LANGUAGE MODEL BASED RESOLUTIONS ADDRESSING TECHNICAL ISSUES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585649: CONDITIONAL BRANCHING FOR A FEDERATED GRAPH QUERY PLAN (granted Mar 24, 2026; 2y 5m to grant)
Patent 12561368: METHODS AND SYSTEMS FOR TENSOR NETWORK CONTRACTION BASED ON LOCAL OPTIMIZATION OF CONTRACTION TREE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561363: Visual Search Determination for Text-To-Image Replacement (granted Feb 24, 2026; 2y 5m to grant)
Patent 12536151: ACCURATE AND QUERY-EFFICIENT MODEL AGNOSTIC EXPLANATIONS (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview (+7.2%): 65%
Median Time to Grant: 3y 3m
PTA Risk: High

Based on 279 resolved cases by this examiner. Grant probability derived from career allow rate.
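
The projection figures reduce to simple ratios over the examiner's career totals. A minimal sketch of that arithmetic, using the page's own numbers as hypothetical inputs (this is illustrative only, not the site's actual model):

```python
# Derive the headline projection figures from career counts.
# Inputs mirror the numbers shown on the page (assumed, illustrative).
granted = 162          # career grants by this examiner
resolved = 279         # resolved cases (grants + abandonments)
interview_lift = 7.2   # percentage-point lift observed with interviews

allow_rate = granted / resolved * 100          # baseline grant probability
with_interview = allow_rate + interview_lift   # interview-adjusted probability

print(f"Career allow rate: {allow_rate:.0f}%")   # prints 58%
print(f"With interview: {with_interview:.0f}%")  # prints 65%
```

Note the baseline here is the raw career allow rate; a real model would also condition on art unit, statute mix, and claim amendments.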
