Prosecution Insights
Last updated: April 19, 2026
Application No. 18/104,798

DISPLAY SYSTEM, DISPLAY METHOD, AND NON-TRANSITORY RECORDING MEDIUM

Status: Non-Final OA (§103)
Filed: Feb 02, 2023
Examiner: LAEKEMARIAM, YOSEF K
Art Unit: 2691
Tech Center: 2600 — Communications
Assignee: Ricoh Company Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 82% (792 granted / 961 resolved), above average (+20.4% vs Tech Center average)
Interview Lift: +14.4% (moderate) among resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 32 applications currently pending
Career History: 993 total applications across all art units
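The headline figures above can be reproduced from the raw counts. The sketch below is illustrative only: it assumes the dashboard computes grant probability as the career allow rate and applies the interview lift as a simple percentage-point adjustment, which is not stated explicitly on the page.

```python
# Reproduce the examiner's headline statistics from raw counts.
# Assumption (not confirmed by the source): grant probability equals the
# career allow rate, and the interview lift is added in percentage points.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float) -> float:
    """Apply the interview lift in percentage points, capped at 100."""
    return min(base_pct + lift_pct, 100.0)

base = allow_rate(792, 961)          # ~82.4%, displayed as 82%
lifted = with_interview(base, 14.4)  # ~96.8%, displayed as 97%
print(round(base), round(lifted))    # → 82 97
```

Under these assumptions the displayed 82% and 97% both fall out of the two raw counts plus the reported lift.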

Statute-Specific Performance

§101: 2.6% (-37.4% vs TC avg)
§103: 71.5% (+31.5% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)
Tech Center average shown for comparison (estimate). Based on career data from 961 resolved cases.
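Because each delta is reported relative to the Tech Center average, that baseline can be recovered as examiner rate minus delta. A quick sanity check over the four rows above (values copied from the table; the implied common baseline is an observation, not a figure stated by the source):

```python
# Recover the implied Tech Center baseline from each table row:
# baseline = examiner rate - delta vs TC average.
rows = {
    "§101": (2.6, -37.4),
    "§103": (71.5, +31.5),
    "§102": (8.3, -31.7),
    "§112": (6.3, -33.7),
}
for statute, (rate, delta) in rows.items():
    print(statute, round(rate - delta, 1))  # every row implies 40.0
```

All four rows imply the same 40.0% Tech Center baseline, which suggests the deltas were computed against a single aggregate estimate rather than per-statute averages.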

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/22/2026 has been entered.

Claim Rejections - 35 USC § 103

1. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

2. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

3. Claims 1-2, 4, 7, 9-10, 12-13, and 15-22 are rejected under 35 U.S.C. 103 as being unpatentable over Ishtiaq et al. (US 2015/0082349) in view of Hegde et al. (US 10,798,341).

Consider claims 1, 12-13, 21. Ishtiaq teaches a display system comprising: circuitry configured to generate a record of a teleconference (abstract; paras. 0083, 0114: stored records such as audio/video content are generated and communicated to a client device upon query by a user); display, on a display, the record of the teleconference (e.g., video content); display text data with the record of the teleconference, the text data being generated by speech recognition of audio data of the record of the teleconference (paras. 0003, 0006, 0034, 0057, 0119: Ishtiaq discusses text generated by applying speech recognition to the audio; the video data can include constituent visual data, audio data, and, in some instances, textual data (e.g., closed captioning data); as users experience other video technologies, they expect the ability to search for content, watch content in a non-linear manner, or watch only the content that interests them); and, in a case where one of the text data generated by the speech recognition of the audio data of the record of the teleconference is selected (paras. 0119, 0021, 0064, 0067: text is generated by applying speech recognition to the audio track of a program, and the user selects the desired segment, or a short audio/video clip playback from the segment, to watch), display a scene of the record of the teleconference corresponding to a time associated with the selected one of the text data (paras. 0062-0063, 0090: at 510, user interface engine 121 may display a listing of the video programs (e.g., visual, audio, and textual features of the video content) matching the returned identifiers; at 512, user interface engine 121 may permit the user to select a video program or may automatically choose one; upon selection of a video program, at 514, segment searcher 137 searches the text records for the chosen video program; in other embodiments, the listing of video programs may not be displayed and the process is performed for all video programs).

Ishtiaq does not explicitly teach the record of the teleconference including screen information having been displayed during the teleconference, surrounding image information having been acquired during the teleconference and representing an image of surroundings, and talker image information cut out from the surrounding image information and representing a person speaking during the teleconference. Hegde teaches these limitations (see Figs. 6A, 6C, current speaker 604, the background images behind each conferee; col. 8, line 44 to col. 9, line 3; col. 9, line 62 to col. 10, line 25). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hegde with those of Ishtiaq in order to provide user interfaces that allow meeting participants to capture highlights and to display those highlights, either during an active video conference or during a replay or summary thereof.

Consider claim 2. Hegde further teaches that the screen information is obtained as an image of a window displayed by an application (see Figs. 6A-C, 6F, 6G, 8A-C).

Consider claim 4. Hegde further teaches that the surrounding image information is transmitted from a device installed at a site corresponding to the teleconference (e.g., the user interface may also include an on-screen display of aggregated highlights of previously recorded video conferences, grouped by meeting in a timeline fashion; col. 2, lines 1-22).

Consider claim 7. Ishtiaq further teaches circuitry configured to scroll the text data so as to display the text data in association with a display time of the record (paras. 0062-0063, 0090: see the citation for claims 1, 12-13, 21 above).

Consider claim 9. Hegde further teaches circuitry configured to: receive an operation by a user; and switch a content to be displayed to a switched content selected from a) the surrounding image information and the talker image information, b) the screen information, and c) the talker image information and the screen information in accordance with a switching operation of a user (see Figs. 6A-C, 6F, 6G, 8A-C; e.g., switching between screen 602 of Fig. 6A and the screen information in Fig. 6B; switching to display all text in Fig. 6E).

Consider claim 10. Hegde further teaches circuitry configured to: acquire a display time of the record at reception of the switching operation (Figs. 6A, 6C: multiple times are displayed across the view modes); and, after switching the content in accordance with the switching operation, display the switched content from a scene corresponding to the display time acquired at reception of the switching operation (Figs. 6A, 6C).

Consider claim 15. Hegde further teaches that the record of the teleconference includes a video (Figs. 6A, 6C, current speaker 604, the background images behind each conferee; col. 8, line 44 to col. 9, line 3; col. 9, line 62 to col. 10, line 25).

Consider claim 16. Ishtiaq further teaches circuitry configured to: play the video corresponding to the text data; and, in a case where the one of the text data is selected, play the video corresponding to the time associated with the selected one of the text data (paras. 0062-0063, 0090: see the citation for claims 1, 12-13, 21 above).

Consider claim 17. Hegde further teaches that the record of the teleconference includes audio (e.g., the recorded audio-video content from that meeting; col. 9, line 62 to col. 10, line 25).

Consider claim 18. Hegde further teaches that the screen information includes: information for displaying the record of the teleconference in a first display field, and information for displaying the text data in a second display field that does not overlap the first display field (Figs. 6A-C, 6F, 6G, 8A-C; the video is placed on the left of the screen and the text on the right).

Consider claim 19. Hegde further teaches that the screen information includes information for scrolling the text data in the second display field (see the bottom of the screen: "scroll down to view more").

Consider claim 20. Hegde further teaches that the one of the text data is selected when a character string of the text data displayed in the second display field is selected (see element 826: multiple texts selected at the top of the display).

Consider claim 22. Ishtiaq further teaches the system according to claim 13, wherein the circuitry is configured to, in response to the selection of the one of the text data, play the audio data of the record from a timing of an utterance of the selected one of the text data, the timing being represented as a time elapsed from a start of the recording (paras. 0034, 0062-0064, 0067: textual representations of the segments can include one or more user interface elements through which a user can select the playback of specific segments to watch; a text record comprises at least a start time and a representation of the text itself, where the start time indicates the point in time within the video content at which the text occurs, i.e., it would have been obvious to select one of the text data represented as a time elapsed from the start of the recording).

5. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Ishtiaq et al. (US 2015/0082349) in view of Hegde et al. (US 10,798,341) as applied to claim 1 above, and further in view of Ishikawa et al. (US 2022/0253198).

Consider claim 3. Ishtiaq in view of Hegde does not explicitly teach that the surrounding image information represents an image of a 360-degree area. Ishikawa teaches this limitation (e.g., display of a three-dimensional virtual space; paras. 0010-0011). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ishikawa with those of Ishtiaq in view of Hegde in order to provide a technique for generating a virtual three-dimensional space, setting a viewpoint (virtual camera position) in the three-dimensional space, and displaying the three-dimensional space, viewed from the virtual camera position, as a 3D image.

6. Claims 6, 8, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ishtiaq et al. (US 2015/0082349) in view of Hegde et al. (US 10,798,341) as applied to claims 1 and 13 above, and further in view of Dagtas (US 6,973,256).

Consider claim 6. Hegde further teaches the display system according to claim 1, wherein the audio data of the record of the teleconference includes audio data recorded by a device installed at a site corresponding to the teleconference and audio data output by a communication terminal participating in the teleconference (conference system of Fig. 1, col. 4, line 48 to col. 5, line 18; recorded video conference, col. 2, lines 48-60). However, Ishtiaq in view of Hegde does not explicitly teach that the text data is obtained by speech recognition of the combined data of the teleconference. Dagtas teaches this limitation (col. 8, lines 54-61). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dagtas with those of Ishtiaq in view of Hegde in order to provide a video playback device capable of identifying the highlights in a recorded video program and selectively playing back the highlights in response to a subsequent viewer request.

Consider claims 8, 14. Ishtiaq in view of Hegde does not explicitly teach circuitry configured to: search the text data for a keyword; and display a matched text, retrieved as a match with the keyword from the text data, for selection. Dagtas teaches these limitations (col. 8, lines 54-61). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dagtas with those of Ishtiaq in view of Hegde for the same reason given for claim 6.

Response to Arguments

Applicant's arguments filed 01/22/2026 have been fully considered but are not persuasive. Applicant argued that neither reference of record discloses generating text via speech recognition of the audio of the record of the teleconference, where that generated text is selected to display a scene or play audio. The Examiner respectfully disagrees. The prior art of record discloses text generated by applying speech recognition to the audio track of a program, the user's selection of a desired segment or a short audio/video clip playback from the segment to watch, and display along with controls for playing the segments from beginning to end (Ishtiaq: paras. 0119, 0021, 0064, 0067). Therefore, the prior art of record discloses the argued claim limitations.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YOSEF K LAEKEMARIAM, whose telephone number is (571) 270-5149. The examiner can normally be reached 9:30-6:30 M-F. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at (571) 272-7503. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/YOSEF K LAEKEMARIAM/
Examiner, Art Unit 2691
02/02/2026

Prosecution Timeline

Feb 02, 2023: Application Filed
Jan 27, 2025: Non-Final Rejection (§103)
May 05, 2025: Interview Requested
May 20, 2025: Applicant Interview (Telephonic)
May 20, 2025: Examiner Interview Summary
Jun 02, 2025: Response Filed
Oct 20, 2025: Final Rejection (§103)
Dec 08, 2025: Interview Requested
Jan 22, 2026: Request for Continued Examination
Jan 29, 2026: Response after Non-Final Action
Feb 02, 2026: Non-Final Rejection (§103)
Mar 31, 2026: Interview Requested
Apr 09, 2026: Applicant Interview (Telephonic)
Apr 09, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604140: SYSTEM AND METHOD FOR CROSS-FADING AUDIO SIGNALS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598443: SYSTEM AND METHOD OF PROVIDING FADED AUDIO EXPERIENCE DURING TRANSITION BETWEEN ENVIRONMENTS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593007: SECURE VIDEO VISITATION SYSTEM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12593187: MEASUREMENT SYSTEM AND MEASUREMENT METHOD
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12570197: IN-VEHICLE CONVERSATION DEVICE
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82% (97% with interview, +14.4%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 961 resolved cases by this examiner. Grant probability derived from career allow rate.
