Prosecution Insights
Last updated: April 19, 2026
Application No. 19/026,442

INFORMATION PROCESSING SYSTEM, COMMUNICATION SYSTEM, AND IMAGE TRANSMISSION METHOD

Non-Final OA: §102, §103
Filed: Jan 17, 2025
Examiner: VO, TUNG T
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Ricoh Company Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 71%, above average (639 granted / 901 resolved; +12.9% vs TC avg)
Interview Lift: +15.6% for resolved cases with interview (strong)
Typical Timeline: 3y 2m avg prosecution; 20 currently pending
Career History: 921 total applications across all art units

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 28.0% (-12.0% vs TC avg)
§112: 3.4% (-36.6% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 901 resolved cases.
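The four deltas above are mutually consistent: subtracting each statute's reported delta from its career rate recovers the same Tech Center average, which is why a single black line can mark the estimate. A minimal sanity-check sketch, using only the figures from the table above:

```python
# Career rates per statute and their reported deltas vs. the Tech Center
# average, as shown in the table above (all values are percentages).
rates = {"101": 5.4, "103": 47.3, "102": 28.0, "112": 3.4}
deltas = {"101": -34.6, "103": +7.3, "102": -12.0, "112": -36.6}

# Each implied TC average is rate - delta; all four statutes agree,
# so one line can represent the Tech Center average estimate.
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every statute implies the same 40.0% TC average
```

Note this is only a consistency check on the displayed numbers, not the dashboard's actual computation.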

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 12-14, 16-26, and 28-31 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ishikawa et al. (US 20200219300 A1).

Regarding claim 12, Ishikawa teaches a communication system (figs. 1-3), comprising: a first communication terminal (20-1 of fig. 1, see details in figures 2 and 3) to display a wide-view image having a wide angle of view (22 of figs. 1 and 3, [0082] the display control unit 60 causes the display apparatus 22 to display a visual field position instruction image that includes a wide area image 112 (FIG. 5) including the visual field image (first visual field image) of the user of the user apparatus 20 and the visual field image (second visual field image) corresponding to the visual field of the other user; [0178] a live camera for capturing the content image including the wide area image); a second communication terminal (20-2 of fig. 1, 22 of fig. 3) to display the wide-view image ([0051]-[0052] the visual field of the user A may be presented to another user (referred to as user B) viewing the same content based on the saved visual field information of the user A; [0067] the plurality of users viewing the same content; [0082] to display a visual field position instruction image that includes a wide area image 112 (FIG. 5) including the visual field image (first visual field image) of the user of the user apparatus 20 and the visual field image (second visual field image) corresponding to the visual field of the other user; [0104], [0107], and [0110] the wide area image 112 is a type of visual field position instruction image indicating the positions of the visual fields of the other users (in this case, users B and C) sharing (viewing) the same content in order to present the visual fields of the other users to the user (in this case, user A); [0178] for the live camera); and an information processing system including first circuitry (21 of figs. 1 and 2, [0070]) configured to: transmit a video including the wide-view image to the first communication terminal and the second communication terminal (20-1 and 20-2 of fig. 1 and displays 22 of fig. 3 for displaying the wide-view image 112 as shown in figures 5 and 6; [0052], [0104], and [0105] users sharing (viewing) the same content that includes the wide-view image; [0082] the wide area image is displayed on the display, 22 of figs. 1 and 3; [0177] and [0178] there are a user wearing AR glasses and another user remotely viewing the content of the image of a live camera arranged in the space of the user, an aerial image including the visual fields of the user and the other user at the same time can be adopted as the wide area image and automatically displayed during the communication between the user and the other user); in response to receiving an operation ([0083] and [0169] a live image captured is caused by the trigger) to capture the wide-view image from the first communication terminal (58 of fig. 2, [0079] and [0177] disclose a camera; [0169], [0177], and [0178] to activate the live camera to capture the live image in the real space), associate viewpoint information corresponding to a viewpoint ([0063]-[0064] The visual field information includes at least one of content identification information, elapsed time information, point-of-view information, visual field center information, or visual field size information) currently being displayed at the first communication terminal with the wide-view image received from an image capturing apparatus which captures the wide-view image (100 of fig. 4, [0079], [0097], [0177], and [0178] the camera to capture the wide-view image), wherein the second communication terminal includes second circuitry (21 of fig. 2) configured to: receive the wide-view image and the viewpoint information from the information processing system ([0104]-[0105], and [0107] the wide area image 112 is a type of visual field position instruction image indicating the positions of the visual fields of the other users (in this case, users B and C) sharing (viewing) the same content in order to present the visual fields of the other users to the user (in this case, user A)); and display the wide-view image at the viewpoint corresponding to the viewpoint information ([0099] the content is shared (viewed) by three users A to C, [0104], [0107], and [0110] users viewing the same content, 112 of fig. 5; [0177] and [0178] the wide-view image is displayed on users’ displays).

Regarding claim 13, Ishikawa teaches the communication system according to claim 12, wherein the first circuitry (21 of figs. 1 and 2) is further configured to, in response to receiving the operation to capture the wide-view image from the first communication terminal ([0080] and [0083] detecting the operation; [0178] the live camera): transmit a capture image instruction ([0083] and [0169] a live image in a real space, an alert in the real space may be the trigger) to the image capturing apparatus ([0079] and [0177] a camera receives instructions to capture the wide-view image, 112 of fig. 6; [0169] and [0178] the live camera to capture the live image in the real space); receive the wide-view image from the image capturing apparatus (58 of fig. 2, receiving the wide-view image from the camera, [0079], image 100 of fig. 4, [0097]); and transmit the wide-view image and the viewpoint information to the second communication terminal ([0051], [0052], and [0067] a plurality of users viewing the same content, 112 of fig. 6).
Regarding claim 14, Ishikawa teaches the communication system according to claim 13, wherein the first circuitry is further configured to, in response to receiving the operation to capture the wide-view image from the first communication terminal, transmit the wide-view image and the viewpoint information to the first communication terminal (22 of figs. 1 and 3, [0169] and [0178]).

Regarding claim 16, Ishikawa teaches the communication system according to claim 12, wherein the first communication terminal and the second communication terminal are registered in a communication group same with the image capturing apparatus (20-1 and 20-2 of fig. 1, sharing and viewing the same content, [0104] and [0178] users with the camera).

Regarding claim 17, Ishikawa teaches the communication system according to claim 16, wherein the first circuitry is further configured to: provide the first communication terminal with a screen that allows a user of the first communication terminal to input start or stop of transmission of the wide-view image by the image capturing apparatus registered in the communication group (22 of fig. 3, [0076] the user can easily communicate with the other users right after the user starts to view the content; [0111] and [0167]); and request the image capturing apparatus to start transmitting the wide-view image in response to a transmission start request from the first communication terminal or stop transmitting the wide-view image in response to a transmission stop request from the first communication terminal ([0061] The content distribution unit 41 distributes data of content through the Internet 31 according to requests from the user apparatuses 20. The data of the content may be distributed to the user apparatuses 20 at the same timing or at different timing; [0170], [0176], and [0183] The display of the wide area image may be triggered when the user recognizes another user and starts a conversation).
Regarding claim 18, Ishikawa teaches the communication system according to claim 12, wherein the wide-view image includes a spherical image ([0097]).

Regarding claim 19, Ishikawa teaches the communication system according to claim 12, wherein the viewpoint indicates a center position or a range of a predetermined area of the wide-view image to be displayed ([0064] and [0182]).

Regarding claim 20, Ishikawa teaches the communication system according to claim 12, wherein the first circuitry is further configured to generate a thumbnail image of the wide-view image, wherein a viewpoint of the thumbnail image is defined by the viewpoint information ([0082] a symbol image, [0154] the visual field position instruction images include the wide area image 112 (FIG. 6), symbol images, and the like).

Regarding claim 21, Ishikawa teaches the communication system according to claim 20, wherein the first circuitry is further configured to transmit the thumbnail image to the second communication terminal ([0114] sharing the content).

Regarding claim 22, Ishikawa teaches the communication system according to claim 20, wherein the second circuitry is further configured to: receive the thumbnail image from the information processing system (51 of fig. 2, receiving the transmitted thumbnail image); display the thumbnail image ([0114] symbol images); and display the wide-view image at the viewpoint corresponding to the viewpoint information in response to a user operation ([0118]).

Regarding claim 23, Ishikawa teaches the communication system according to claim 12, wherein the second circuitry is further configured to: generate a thumbnail image of the wide-view image ([0082] a symbol image, [0114] symbol images, and [0154] the visual field position instruction images include the wide area image 112 (FIG. 6), symbol images, and the like); and display the thumbnail image (121B and 121C of fig. 7, [0118]), wherein a viewpoint of the thumbnail image is defined by the viewpoint information ([0063]-[0064], and [0185]).

Regarding claim 24, Ishikawa teaches the communication system according to claim 23, wherein the second circuitry is further configured to transmit the thumbnail image to the information processing system (21 of fig. 2, [0114] and [0118]), and the information processing system transmits the thumbnail image to the first communication terminal ([0067] The communication management unit 43 manages communication, such as exchange of messages using voice or characters, between the users viewing the same content; [0114]-[0118]).

Regarding claim 25, see analysis in claim 1. Regarding claims 26 and 28-30, see analysis in claims 13, 16, 18, and 19. Regarding claim 31, see analysis in claim 1.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 15 and 27 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ishikawa (US 20200219300 A1) in view of Matsuo et al. (US 20220335693 A1).

Regarding claim 15, Ishikawa teaches the communication system according to claim 12. Ishikawa does not teach wherein the first circuitry is further configured to, in response to receiving a notification of completion of storage of the wide-view image in a storage destination, transmit location information indicating the storage destination to the second communication terminal. Matsuo teaches wherein the first circuitry is further configured to, in response to receiving a notification of completion of storage of the wide-view image in a storage destination, transmit location information indicating the storage destination to the second communication terminal ([0126] When receiving the upload of the updated information, the sharing server 20 may notify MR terminals 10 other than the MR terminal 10 that has transmitted the updated information that there is an update, such that these MR terminals 10 can also use the updated 3D information). Taking the teachings of Ishikawa and Matsuo together as a whole, it would have been obvious to one of ordinary skill in the art at the time of invention to apply the notification to the users of Matsuo to the first circuitry of Ishikawa, so that the processing time of the processor can be shortened and the waste of memory life due to updating of the memory content of the 3D information storage unit can be reduced ([0118] of Matsuo).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Horio et al. (US 20180167422 A1) discloses the remote support server 20, to which the request of image signal transmission has been assigned, starts transmission, to the supporter terminal 50, of image signals from the worker terminals 30A to 30C of the sites 3A to 3C that are associated with the account of the supporter 5A as the site 3 that is taken charge by the supporter 5A.
The display of the supporter terminal 50 then changes from the login screen 200 to a menu screen 210. Kim et al. (US 20180068489 A1) discloses the processor 200 may group the plurality of user terminal devices into one group, and transmit the image of the same view point region to the user terminal devices 100 belonging to the same group. For example, the processor 220 may group first and second user terminal devices having motion information between 10° and 15° in the right direction into one group, and transmit the image of the same view point region corresponding to the corresponding motion information to the first and second user terminal devices.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUNG T VO, whose telephone number is (571) 272-7340. The examiner can normally be reached Monday-Friday, 6:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TUNG T VO/
Primary Examiner, Art Unit 2425

Prosecution Timeline

Jan 17, 2025: Application Filed
Mar 18, 2025: Response after Non-Final Action
Jan 23, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603995: Video Coding Using Multi-resolution Reference Picture Management (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598278: SINGLE 2D DIGITAL IMAGE CAPTURE SYSTEM PROCESSING, DISPLAYING OF 3D DIGITAL IMAGE SEQUENCE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593024: HEAD-UP DISPLAY DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12593020: SINGLE 2D IMAGE CAPTURE SYSTEM, PROCESSING & DISPLAY OF 3D DIGITAL IMAGE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587624: FINAL VIEW GENERATION USING OFFSET AND/OR ANGLED SEE-THROUGH CAMERAS IN VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR) (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71% (86% with interview, +15.6% lift)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 901 resolved cases by this examiner. Grant probability derived from career allow rate.
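The headline figures can be reproduced from the career counts reported above. A small sketch, assuming (per the note) that grant probability is simply the career allow rate; treating the +15.6% interview lift as additive is my assumption, and it slightly overshoots the displayed 86%:

```python
granted, resolved = 639, 901   # career counts reported for this examiner
allow_rate = granted / resolved            # ~0.709 baseline allow rate
interview_lift = 0.156                     # reported +15.6% interview lift

print(round(allow_rate * 100))             # 71, matching the 71% grant probability

# Naively adding the lift gives ~87%, a bit above the displayed 86%,
# which suggests the with-interview figure is computed from
# interview-case outcomes directly rather than as baseline + lift.
print(round((allow_rate + interview_lift) * 100))   # 87
```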
