Prosecution Insights
Last updated: April 19, 2026
Application No. 18/895,458

SYSTEM AND METHOD OF CONTROLLING DISPLAY, AND RECORDING MEDIUM

Status: Final Rejection (§103)
Filed: Sep 25, 2024
Examiner: XIE, KWIN
Art Unit: 2626
Tech Center: 2600 — Communications
Assignee: Ricoh Company Ltd.
OA Round: 2 (Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 7m
Grant Probability With Interview: 96%

Examiner Intelligence

Grants 64% of resolved cases.

Career Allow Rate: 64% (277 granted / 435 resolved; +1.7% vs TC avg)
Interview Lift: +32.1% (strong), comparing resolved cases with and without an interview
Typical Timeline: 2y 7m avg prosecution; 16 applications currently pending
Career History: 451 total applications, across all art units
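
To sanity-check the card's arithmetic: the allow rate is simply granted over resolved, and the 96% "with interview" figure shown elsewhere on this page is consistent with adding the reported lift to the career rate. A minimal sketch in Python follows; the tool's exact methodology is not published, so treating the lift as a straight additive delta is an assumption.

    # Career allow rate as displayed: granted / resolved cases.
    granted, resolved = 277, 435
    allow_rate = granted / resolved               # 0.6368... -> shown as 64%

    # Assumption (not stated on the page): "interview lift" is an additive
    # delta in percentage points over the career allow rate.
    interview_lift = 0.321
    with_interview = allow_rate + interview_lift  # 0.9578... -> shown as 96%

    print(f"career allow rate: {allow_rate:.1%}")     # 63.7%
    print(f"with interview:    {with_interview:.1%}") # 95.8%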

Statute-Specific Performance

§101: 1.5% (-38.5% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 44.0% (+4.0% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)

Tech Center averages are estimates • Based on career data from 435 resolved cases
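
The "vs TC avg" deltas let you back out the implied Tech Center baseline for each statute (the page does not define the underlying metric, so this only checks internal consistency). A quick check in Python shows every implied baseline landing at about 40%, which fits the caption calling the average an estimate:

    # Examiner rate and delta vs. TC average, in percent, as displayed above.
    stats = {"§101": (1.5, -38.5), "§103": (50.0, 10.0),
             "§102": (44.0, 4.0), "§112": (3.3, -36.7)}
    for statute, (rate, delta) in stats.items():
        # Implied baseline = examiner rate minus the reported delta.
        print(f"{statute}: implied TC avg ~ {rate - delta:.1f}%")
    # Every statute prints ~40.0%, suggesting a single pooled TC estimate.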

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

As a result of the Amendment filed on November 10, 2025, claims 1-20 are pending. Claims 1, 3, 4, 6-10, 12, 13 and 17-19 are amended. New claim 20 is added.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on the same combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. An updated search was conducted, but no new art is cited at this time due to reliance on previously cited references for the new rejection grounds.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over O’Leary et al., United States Patent Application Publication No. US 2023/0262317 A1, in view of Agarawala et al., United States Patent Application Publication No. US 2021/0373741 A1.

Regarding claim 1, O’Leary discloses a system of controlling display (Figs. 1-4, generally), comprising: a display (Figs. 1-4, display, #450 or touch screen, #112); and circuitry configured to control the display to display a screen (Figs. 1-3, display controller, #156; Detailed Description, [0071-0075]) including a predetermined-area image representing a viewable area of a wide-view image (Figs. 5-6, Detailed Description, [0227-0229], “FIGS. 6A-6Q illustrate device 600 displaying user interfaces on display 601 (e.g., a display device or display generation component) for managing a live video communication session. FIGS. 6A-6Q depict various embodiments in which device 600 automatically reframes a displayed portion of a camera field-of-view based on conditions detected in a scene that is within the field-of-view of the camera while an automatic framing mode is enabled...device 600 includes one or more cameras 602 (e.g., front-facing cameras) for capturing image data and, optionally, depth data of a scene that is within the field-of-view of the camera. In some embodiments, camera 602 is a wide angle camera (e.g., a camera that includes a wide angle lens, or a lens that has a relatively short focal length and wide field-of-view). In some embodiments, device 600 includes one or more features of devices 100, 300, or 50…Options menu 608 includes various selectable options for controlling one or more aspects of the video conference.”), wherein, in response to detection of a first operation on the predetermined-area image, the circuitry performs first processing of displaying another predetermined-area image representing another viewable area of the wide-view image, said another viewable area reflecting a change in virtual viewpoint to the wide-view image (Figs. 6, Detailed Description, [0227-0245], “In FIG. 6D, device 600 detects input 626 (e.g., a tap input) on framing mode affordance 610. In response, device 600 bolds framing mode affordance 610 (to indicate its selected/enabled state) and enables the automatic framing mode, as shown in FIG. 6E. When automatic framing mode is enabled, device 600 automatically adjusts the displayed video feed field-of-view based on conditions detected within scene 615. In the embodiment depicted in FIG. 6E, device 600 adjusts the displayed video feed field-of-view to center on Jane's face. Accordingly, device 600 updates camera preview 606 to include representation 622-1 of Jane centered in the frame and, in the background, representation 621-1 of the couch upon which she is sitting. Field-of-view 620 remains fixed because the position of camera 602 remains unchanged. However, the position of Jane's face within field-of-view 620 does change. As a result, device 600 adjusts (e.g., repositions) the displayed portion of field-of-view 620 so that Jane remains positioned within camera preview 606.”), and in response to detection of a second operation on the predetermined-area image, the second operation being different than the first operation, the circuitry performs second processing of scrolling the predetermined-area image in the screen (Figs. 6, Detailed Description, [0164][0245-0250], “FIGS. 6H and 6I illustrate specific embodiments of a transition between different camera previews. In some embodiments, other transitions can be executed such as, for example, by continuously moving (e.g., panning and/or zooming) the video feed field-of-view within field-of-view 620 to follow Jane 622 as she moves about scene 615.”; Examiner’s note—panning is a form of scroll).

O’Leary does not explicitly disclose that the wide-view image is a spherical image. O’Leary also does not explicitly disclose that a further image is displayed in the screen, with second processing of scrolling the predetermined-area image and said further image.

Agarawala, in a similar field of endeavor, discloses a system (Fig. 44, generally) comprising a wide-view image being a spherical image (Detailed Description, [0045-0048], “In an embodiment, the meshes may either lay flat (horizontal or vertical) or may be curved spherically around the user.”; Figs. 8; see also Detailed Description, [0316] for wide-view images), and a further image displayed in the screen with second processing of scrolling the predetermined-area image and said further image (Detailed Description, [0048], “The user may perform any number of interactions with the data elements within the AR environment. Example interactions include summon (bring an element spatially closer, reduce its depth), push (move or toss an element spatially further, increase its depth), zoom (make an element bigger or smaller), merge (managing multiple elements or objects together as one element, or multiple meshes onto the same mesh or place them in the same area), pan (scrolling through various elements or meshes), stack/fan-out (stack elements together like playing cards, or fan out the elements of a stack so that one or more of the elements is more or fully visible), blur (reduce the visibility of a particular elements such the that element remains within the view area, but is not or is less readable), and hide/delete (removing an element from the AR display area). Users may also import or add new data elements into an existing workspace or environment”; Examiner’s note—Agarawala explicitly mentions scrolling multiple elements).

It would have been obvious to one of ordinary skill in the art to have modified the circuitry of O’Leary to include the teachings of Agarawala to provide wherein the wide-view image is a spherical image, and a further image is displayed in the screen and the second processing involves scrolling the predetermined-area image and said further image. The motivation to combine is to utilize image meshes at different depths, which provide more information and views to the user (see inter alia Agarawala, Detailed Description, [0044-0047], “FIG. 7B illustrates an example of how different images may be arranged on either a horizontal or vertical mesh at varying depths. Furthermore, through allowing a user to arrange the AR data elements around a room or physical space, rather than a 2D physical screen, the AR system can display or make available more information at the same time than would otherwise be available using a traditional 2D display.”). The fact that O’Leary and Agarawala disclose very similar types of display systems with adjustable wide-angle views for purposes of conferencing (see Agarawala, Detailed Description, [0316-0320]), and the fact that O’Leary already suggests a scrolling feature (O’Leary, Detailed Description, [0164]), makes this combination more easily implemented.

Regarding claim 2, O’Leary discloses the system further comprising: a connection interface to connect with an operation device, wherein the circuitry is configured to receive the first operation and the second operation from the operation device connected with the connection interface (Figs. 1-3, I/O interface, #330; signal lines, #103; Detailed Description, [0063][0168]). Thus, it would have remained obvious to have combined O’Leary and Agarawala in the manner described in claim 1.

Regarding claim 3, O’Leary discloses wherein the operation device is a mouse (Detailed Description, [0071], “The other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some embodiments, input controller(s) 160 are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse”), the first operation is a drag performed on the predetermined-area image, and the second operation is a scroll-wheel operation performed on the predetermined-area image (Detailed Description, [0164], “It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc…any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized.”; see also Detailed Description, [0197], “Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact”). Thus, it would have remained obvious to have combined O’Leary and Agarawala in the manner described in claim 1.

Regarding claim 4, O’Leary in combination with Agarawala discloses every element of claim 1, but O’Leary does not explicitly disclose wherein the circuitry is further configured to control the display to display a scroll bar outside the predetermined-area image, the scroll bar causing, when operated, the predetermined-area image to scroll, and move the scroll bar to a location that corresponds to a location to which the predetermined-area image is scrolled according to the second operation. Agarawala discloses a system wherein the circuitry is further configured to control the display to display a scroll bar outside the predetermined-area image, the scroll bar causing, when operated, the predetermined-area image to scroll, and move the scroll bar to a location that corresponds to a location to which the predetermined-area image is scrolled according to the second operation (Figs. 51-62; Detailed Description, [0201][0240-0250], “Scroll 6116 may be a visual or digital indicator that appears on a digital canvas that only displays a portion or subset of the digital objects 6110 that are pinned to that wall. For example, as illustrated only 2 of the 3 digital objects (from the center wall of saved meeting space 6106) appear on the center wall in opened space 6124A. User 6102 may use a hand gesture to select scroll 6116 with their fingers. The gesture may be captured by AR device 6103, received by AR environment 6104, and the remaining digital objects (or a second subset of digital objects not currently displayed) may be rendered on the scrollable canvas”; see also Agarawala claims 6, 13, 19, 20).

It would have been obvious to one of ordinary skill in the art to have further modified the system of O’Leary-Agarawala to provide the teachings of a scroll bar, as taught in Agarawala, such that the circuitry is further configured to control the display to display a scroll bar outside the predetermined-area image, the scroll bar causing, when operated, the predetermined-area image to scroll, and to move the scroll bar to a location that corresponds to a location to which the predetermined-area image is scrolled according to the second operation. The motivation to combine is to use a known type of user input (i.e., a scroll bar) to adjust the image view and see new objects as a result (see Agarawala, Detailed Description, [0242-0250][0277-0285], “In the example shown, AR user 2 may select scroll 6116 to see the third digital object 6312.”). The fact that O’Leary and Agarawala disclose very similar types of display systems with adjustable wide-angle views for purposes of conferencing (see Agarawala, Detailed Description, [0316-0320]), and the fact that O’Leary already suggests a scrolling feature (O’Leary, Detailed Description, [0164]), makes this combination more easily implemented.

Regarding claim 5, O’Leary discloses a system further comprising: a touch panel to detect the first operation and the second operation based on a contact made to the touch panel by an operator (Figs. 1-3, touch-sensitive display system, #112; Detailed Description, [0130-0156], “The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.”). Thus, it would have remained obvious to have combined O’Leary and Agarawala in the manner described in claim 1.

Regarding claim 6, O’Leary discloses wherein the first operation is an operation to touch by the operator with one finger, and the second operation is an operation to touch by the operator with two fingers (see Detailed Description, [0091][0165-0170], “In this embodiment, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device 100.”; see also Detailed Description, [0507] on pinching gestures, which require more than one finger). Thus, it would have remained obvious to have combined O’Leary and Agarawala in the manner described in claim 1.

Regarding claim 7, O’Leary discloses wherein the first operation is swiping with the one finger, and the second operation is swiping with the two fingers (see the same passages cited for claim 6: Detailed Description, [0091][0165-0170]; see also Detailed Description, [0507] on pinching gestures, which require more than one finger). Thus, it would have remained obvious to have combined O’Leary and Agarawala in the manner described in claim 1.

Regarding claim 8, O’Leary discloses wherein the circuitry is further configured to control the display to display one or more icons in the predetermined-area image, the one or more icons for performing third processing, the third processing being different from the first processing and the second processing (Detailed Description, [0093-0094]; see also Detailed Description, [0171] for icon operation that is not related to changing the field of view), and in response to selection of one of the one or more icons, the circuitry is configured to perform the third processing defined by the selected icon, without performing the first processing or the second processing (same citations). Thus, it would have remained obvious to have combined O’Leary and Agarawala in the manner described in claim 1.

Regarding claim 9, O’Leary discloses wherein the third processing is enlarging a size of the predetermined-area image or reducing the size of the predetermined-area image (see inter alia Detailed Description, [0218]; see also Detailed Description, [0227-0270] on both zoom in and zoom out, “In accordance with a determination that a second set of criteria is met, including that the subject is detected at a second position (e.g., 625 in FIG. 6H) different from the first position, displaying the live video communication interface having a representation of a second field-of-view (e.g., 606 in FIG. 6H) different from the representation of the first field-of-view (e.g., the live video communication interface is displayed with a second digital zoom level and/or a second displayed portion of the field-of-view of the one or more cameras) (e.g., a representation of a field-of-view that is zoomed in, zoomed out, and/or panned in a direction relative to the representation of the first field-of-view)”; see also Detailed Description, [0311-0325] on selectable zoom). Thus, it would have remained obvious to have combined O’Leary and Agarawala in the manner described in claim 1.

Regarding claim 10, O’Leary discloses wherein the screen includes a display area, and the third processing is switching an image to be displayed in the display area, the image being one of the predetermined-area image or another image (see Figs. 6 and Detailed Description, [0227-0260], “The result of the match cut is that the camera preview 606 appears to transition from a first camera view in FIG. 6G, to a different camera view in FIG. 6H (the different camera view optionally having a same zoom level as the camera view in FIG. 6G). However, the actual field-of-view of camera 602 (e.g., field-of-view 620) has not changed. Rather, only the portion of the field-of-view that is displayed (portion 625) has changed position within field-of-view 620.”). Thus, it would have remained obvious to have combined O’Leary and Agarawala in the manner described in claim 1.

Regarding claim 11, O’Leary discloses a system further comprising: a network interface to communicate with a counterpart communication terminal, wherein said another image is an image of data to be shared with the counterpart communication terminal (Figs. 1-3, peripherals interface, #118; Detailed Description, [0068-0069], “Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU 120 and memory 102”; see also Figs. 6 and Detailed Description, [0082], describing multiple cameras capturing different images, and [0227-0260]). Thus, it would have remained obvious to have combined O’Leary and Agarawala in the manner described in claim 1.

Regarding claim 12, O’Leary-Agarawala discloses a method of controlling display comprising the structural and functional components of claim 1. Thus, claim 12 is rejected under the same reasoning as claim 1, with the same sections of O’Leary combined with Agarawala.

Regarding claims 13-18: claim 13 is met by the rejection of claim 3; claim 14 by the rejection of claim 4; claim 15 by the rejection of claim 5; claim 16 by the rejection of claim 8; claim 17 by the rejection of claim 9; and claim 18 by the rejection of claim 10.

Regarding claim 19, O’Leary-Agarawala discloses a non-transitory recording medium storing a plurality of instructions (O’Leary, Figs. 1-3, generally, memory, #102; see also Fig. 5, memory, #518; Detailed Description, [0203]) which, when executed by one or more processors, cause the processors to perform a method of controlling display (Figs. 1-3, processor, #120; portable multifunction device, #100; display controller, #156), comprising the limitations of claims 1 and 12. Thus, claim 19 is rejected under the same reasoning as claim 1, with the same sections of O’Leary combined with Agarawala.

Regarding claim 20, O’Leary-Agarawala discloses every element of claim 1, and Agarawala discloses wherein said further image includes a shared screen data image (Detailed Description, [0068-0070], “As such, the users in a shared AR/VR space may engage in a shared experience of videos, images, or other data. In an embodiment, a user may queue up a video to be watched as a group once a specified number of users are viewing the video.”; see also Detailed Description, [0155-0159]). Thus, it would have remained obvious to have combined O’Leary and Agarawala in the same manner and with the same rationale (i.e., enabling more depth of views and user experiences) as described in claim 1.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KWIN XIE, whose telephone number is (571) 272-7812. The examiner can normally be reached 9:00 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Temesghen Ghebretinsae, can be reached at (571) 272-3017. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KWIN XIE/
Primary Examiner, Art Unit 2626
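
For readers parsing the claim-1 dispute: the claim ties two different operations on the same predetermined-area image to two different behaviors. A minimal sketch of that claimed input model follows (illustrative only; all names are hypothetical, and neither the application nor the cited references publish code):

    from dataclasses import dataclass

    @dataclass
    class Viewpoint:
        yaw: float    # horizontal look direction into the spherical image
        pitch: float  # vertical look direction

    class SphericalViewport:
        """A 'predetermined-area image': one viewable area of a wide-view (spherical) image."""

        def __init__(self) -> None:
            self.viewpoint = Viewpoint(yaw=0.0, pitch=0.0)
            self.scroll_offset = 0.0  # position of the image (and any co-displayed image) in the screen

        def on_first_operation(self, dx: float, dy: float) -> None:
            # First processing (e.g., one-finger drag / mouse drag): change the
            # virtual viewpoint so a different area of the spherical image is shown.
            self.viewpoint.yaw += dx
            self.viewpoint.pitch += dy

        def on_second_operation(self, dy: float) -> None:
            # Second processing (e.g., two-finger swipe / scroll wheel): scroll the
            # predetermined-area image -- and, per amended claim 1, a further image
            # (such as shared-screen data) -- within the screen.
            self.scroll_offset += dy

The rejection maps O'Leary's reframing and panning onto both behaviors (note the examiner's remark that "panning is a form of scroll") and relies on Agarawala for the spherical image and the co-scrolled further image.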

Prosecution Timeline

Sep 25, 2024 — Application Filed
Sep 12, 2025 — Non-Final Rejection (§103)
Nov 10, 2025 — Response Filed
Dec 05, 2025 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602132
DISPLAY DEVICE
2y 5m to grant • Granted Apr 14, 2026
Patent 12578813
Touch Display Substrate, Manufacturing Method Therefor, and Touch Display Device
2y 5m to grant • Granted Mar 17, 2026
Patent 12578822
TOUCH COORDINATE EDGE CORRECTION
2y 5m to grant • Granted Mar 17, 2026
Patent 12566469
WEARABLE ELECTRONIC DEVICE COMPRISING SENSOR, AND METHOD BY WHICH ELECTRONIC DEVICE PROCESSES TOUCH SIGNAL
2y 5m to grant • Granted Mar 03, 2026
Patent 12561003
HAPTIC FEEDBACK HEADPIECE
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
Grant Probability With Interview: 96% (+32.1%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 435 resolved cases by this examiner. Grant probability is derived from the career allow rate.
