Prosecution Insights
Last updated: April 19, 2026
Application No. 18/792,858

DEVICE MANAGEMENT SYSTEM, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING SERVER, AND NON-TRANSITORY RECORDING MEDIUM

Non-Final OA — §102, §103
Filed: Aug 02, 2024
Examiner: LEE, GENE W
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: Ricoh Company Ltd.
OA Round: 2 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 2-3
To Grant: 2y 4m
With Interview: 84%

Examiner Intelligence

Grants 74% — above average

Career Allow Rate: 74% (479 granted / 652 resolved; +11.5% vs TC avg)
Interview Lift: +10.7% (moderate; based on resolved cases with interview)
Typical Timeline: 2y 4m avg prosecution; 18 currently pending
Career History: 670 total applications across all art units
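The headline figures in this panel reduce to simple ratios over the examiner's resolved cases. A minimal sketch in Python, assuming the rate is just granted over resolved (the function name and rounding behavior are illustrative, not the tool's actual implementation):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved applications."""
    return 100.0 * granted / resolved

# The 479 granted / 652 resolved shown above.
rate = allow_rate(479, 652)
print(f"{rate:.1f}%")  # 73.5% — the dashboard displays this as 74%
```

The exact ratio is 73.5%, so the displayed 74% implies the dashboard rounds up or includes cases beyond the 479/652 shown.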

Statute-Specific Performance

§101: 1.7% (-38.3% vs TC avg)
§103: 46.1% (+6.1% vs TC avg)
§102: 25.7% (-14.3% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 652 resolved cases
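The deltas in this panel can be checked against the Tech Center baseline they imply. A hypothetical reconstruction (the dict layout is an assumption, and the source does not define what each percentage measures):

```python
# Examiner rate and delta vs Tech Center average, as shown in the panel.
panel = {
    "§101": (1.7, -38.3),
    "§103": (46.1, +6.1),
    "§102": (25.7, -14.3),
    "§112": (21.9, -18.1),
}

for statute, (rate, delta) in panel.items():
    implied_tc_avg = rate - delta  # back out the baseline from the delta
    print(f"{statute}: examiner {rate}% vs TC avg {implied_tc_avg:.1f}%")
```

Notably, all four deltas back out to the same 40.0% baseline, suggesting the panel compares every statute against a single Tech Center figure rather than per-statute averages.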

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Note that citations to figures and elements should be understood to also implicitly refer to any pertinent explanatory text in the reference.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3-9, 11-13, 15-19, and 21 are rejected, claims 1 and 3-5 in the alternative, under 35 U.S.C. 102(a)(1) as being anticipated by US 2012/0127323 A1 (Kasuya).

Regarding claim 1, Kasuya teaches a device management system (Abstract; Fig. 1) comprising: a camera that captures a surrounding image of a surrounding area (Fig. 1 at 4); an operation detector that detects an operation by a user (Fig. 2 at 14); circuitry configured to, in response to the operation by the user ([75]), generate a partial image of the surrounding image based on a plurality of predetermined images included in the surrounding image ([36]; Figs. 3, 12); and a display that displays the partial image (Fig. 1 at 2 or 6), wherein the circuitry is further configured to determine an area for the partial image in the surrounding image based on a range defined by the plurality of predetermined images ([36], [43], [48]; Fig. 6).

Regarding claim 3, Kasuya teaches wherein the plurality of predetermined images indicate a first end and a second end of the partial image in the surrounding image (Fig. 6), and the circuitry is configured to generate the partial image that includes the plurality of predetermined images ([71]-[72], Fig. 6).

Regarding claim 4, Kasuya teaches wherein the plurality of predetermined images are two-dimensional codes (Fig. 5 at 40, 41; Fig. 6 at 51A-51D).

Regarding claim 5, Kasuya teaches wherein each of the plurality of predetermined images is an image displayed on an object located in the partial image of the surrounding image (Fig. 6 at 51A-51D, Fig. 1 at 2).

Regarding claim 6, Kasuya teaches wherein the operation detector detects an instruction by the user to end generating the partial image of the surrounding image based on the plurality of predetermined images included in the surrounding image ([77]).

Regarding claim 7, Kasuya teaches a device adapted to connect to a terminal including a display (Abstract; Fig. 1), the device comprising circuitry (Fig. 1 at 5; Fig. 2) configured to: acquire a surrounding image captured by a camera ([30], [33]; Fig. 1 at 4, 5); in response to an operation by a user ([75]), generate a partial image of the surrounding image based on a plurality of predetermined images included in the surrounding image ([36]; Figs. 3, 12); and transmit the partial image to the terminal ([71]), wherein the circuitry is further configured to determine an area for the partial image in the surrounding image based on a range defined by the plurality of predetermined images ([36], [43], [48]; Fig. 6).

Regarding claim 8, Kasuya teaches a method (Abstract; Fig. 1) comprising: acquiring a surrounding image captured by an image-capturing device ([30], [33]; Fig. 1 at 4, 5); in response to an operation by a user ([75]), generating a partial image of the surrounding image based on a plurality of predetermined images included in the surrounding image ([36]; Figs. 3, 12); and transmitting the partial image to a terminal including a display ([71]), wherein an area for the partial image in the surrounding image is determined based on a range defined by the plurality of predetermined images ([36], [43], [48]; Fig. 6).

Regarding claim 9, Kasuya teaches a non-transitory recording medium storing a plurality of instructions which, when executed by one or more processors, cause the one or more processors to perform the method according to claim 8 ([34]-[35]; Fig. 2).

Regarding claim 11, Kasuya teaches wherein the plurality of predetermined images indicate a first end and a second end of the partial image in the surrounding image (Fig. 6), and the circuitry is configured to generate the partial image that includes the plurality of predetermined images ([71], Fig. 6).

Regarding claim 12, Kasuya teaches wherein the plurality of predetermined images are two-dimensional codes (Fig. 5 at 40, 41; Fig. 6 at 51A-51D).

Regarding claim 13, Kasuya teaches wherein each of the plurality of predetermined images is an image displayed on an object located in the partial image of the surrounding image (Fig. 6 at 51A-51D, Fig. 1 at 2).

Regarding claim 15, Kasuya teaches wherein the operation detector detects an instruction by the user to end generating the partial image of the surrounding image based on the plurality of predetermined images included in the surrounding image ([77]).

Regarding claim 16, Kasuya teaches a device adapted to connect to a terminal including a display (Abstract; Fig. 1), the device comprising: a camera (Fig. 1 at 4); and circuitry (Fig. 1 at 3, 4, 5; Fig. 2) configured to: acquire a surrounding image captured by the camera ([30], [33]; Fig. 1 at 4, 5); in response to an operation by a user ([75]), clip a part of the surrounding image based on a plurality of predetermined images included in the surrounding image and generate a clipped image ([36]; Figs. 3, 12); and transmit the clipped image to the terminal ([71]), wherein the circuitry is further configured to determine the part to be clipped in the surrounding image based on a range defined by the plurality of predetermined images ([36], [43], [48]; Fig. 6).

Regarding claim 17, Kasuya teaches wherein the plurality of predetermined images indicate a first end and a second end of the part to be clipped in the surrounding image (Fig. 6), and the circuitry is configured to generate the clipped image by clipping the part that includes the plurality of predetermined images ([71], Fig. 6).

Regarding claim 18, Kasuya teaches wherein the predetermined images are two-dimensional codes (Fig. 5 at 40, 41; Fig. 6 at 51A-51D).

Regarding claim 19, Kasuya teaches wherein each of the plurality of predetermined images is an image displayed on an object located in the part to be clipped in the surrounding image (Fig. 6 at 51A-51D, Fig. 1 at 2).

Regarding claim 21, Kasuya teaches wherein the operation detector detects an instruction by the user to end clipping the part of the surrounding image based on the plurality of predetermined images included in the surrounding image ([77]).

Claims 1, 3-5, and 22 are rejected, claims 1 and 3-5 in the alternative, under 35 U.S.C. 102(a)(1) as being anticipated by JP 2014010568 A (Ujiie).

Regarding claim 1, Ujiie teaches a device management system comprising: a camera that captures a surrounding image of a surrounding area ([48]); an operation detector that detects an operation by a user ([114]); circuitry configured to, in response to the operation by the user, generate a partial image of the surrounding image based on a plurality of predetermined images included in the surrounding image ([116]-[117]); and a display that displays the partial image ([118]), wherein the circuitry is further configured to determine an area for the partial image in the surrounding image based on a range defined by the plurality of predetermined images ([116]-[117]).

Regarding claim 3, Ujiie teaches wherein the plurality of predetermined images indicate a first end and a second end of the partial image in the surrounding image, and the circuitry is configured to generate the partial image that includes the plurality of predetermined images ([116]-[117]).

Regarding claim 4, Ujiie teaches wherein the plurality of predetermined images are two-dimensional codes ([193]).

Regarding claim 5, Ujiie teaches wherein each of the plurality of predetermined images is an image displayed on an object located in the partial image of the surrounding image ([66], [80], [116]-[117]).

Regarding claim 22, Ujiie teaches a terminal including a display (Fig. 2A(A) at 4), the terminal comprising circuitry configured to: acquire a surrounding image captured by a camera ([48]); in response to an operation by a user ([114]), clip a part of the surrounding image based on a plurality of predetermined images included in the surrounding image and generate a clipped image ([116]-[117]); and display the clipped image in the display ([118]), wherein the circuitry is further configured to determine the part to be clipped in the surrounding image based on a range defined by the plurality of predetermined images ([116]-[117]).
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 10, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2012/0127323 A1 (Kasuya) as applied to claims 5, 13, and 19, respectively, above, and further in view of US 2017/0068321 A1 (Kuo).

Regarding claim 10, Kasuya does not expressly teach wherein the operation by the user is performed with respect to the object. Kuo teaches wherein the operation by the user is performed with respect to the object ([18]). The suggestion to modify the teaching of Kasuya by the teaching of Kuo is present as both Kasuya and Kuo teach a projector and camera system, each with different forms of input. The motivation is to provide additional means of input for a user. The combination would have been unsurprising and had a reasonable expectation of success because both Kasuya and Kuo teach a projector and camera system, each with different forms of input. Thus, before the effective filing date of the current application, the combination of Kasuya and Kuo would have rendered obvious, to one of ordinary skill in the art, wherein the operation by the user is performed with respect to the object.

Regarding claim 14, Kasuya does not expressly teach wherein the operation by the user is performed with respect to the object. Kuo teaches wherein the operation by the user is performed with respect to the object ([18]). The suggestion to modify the teaching of Kasuya by the teaching of Kuo is present as both Kasuya and Kuo teach a projector and camera system, each with different forms of input. The motivation is to provide additional means of input for a user. The combination would have been unsurprising and had a reasonable expectation of success because both Kasuya and Kuo teach a projector and camera system, each with different forms of input. Thus, before the effective filing date of the current application, the combination of Kasuya and Kuo would have rendered obvious, to one of ordinary skill in the art, wherein the operation by the user is performed with respect to the object.

Regarding claim 20, Kasuya does not expressly teach wherein the operation by the user is performed with respect to the object. Kuo teaches wherein the operation by the user is performed with respect to the object ([18]). The suggestion to modify the teaching of Kasuya by the teaching of Kuo is present as both Kasuya and Kuo teach a projector and camera system, each with different forms of input. The motivation is to provide additional means of input for a user. The combination would have been unsurprising and had a reasonable expectation of success because both Kasuya and Kuo teach a projector and camera system, each with different forms of input. Thus, before the effective filing date of the current application, the combination of Kasuya and Kuo would have rendered obvious, to one of ordinary skill in the art, wherein the operation by the user is performed with respect to the object.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GENE W LEE whose telephone number is (571)270-7148. The examiner can normally be reached M-F 9:45am-6:15pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, LunYi Lao, can be reached at 571-272-7671. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Gene W Lee/
Primary Examiner, Art Unit 2621

Prosecution Timeline

Aug 02, 2024
Application Filed
Apr 18, 2025
Non-Final Rejection — §102, §103
Jun 16, 2025
Interview Requested
Jul 08, 2025
Applicant Interview (Telephonic)
Jul 08, 2025
Examiner Interview Summary
Jul 23, 2025
Response Filed
Nov 20, 2025
Request for Continued Examination
Nov 28, 2025
Response after Non-Final Action
Dec 27, 2025
Non-Final Rejection — §102, §103
Feb 27, 2026
Examiner Interview Summary
Feb 27, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586503
INTERPOLATION AMPLIFIER AND SOURCE DRIVER COMPRISING THE SAME
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12579958
DEVICE AND METHOD FOR TRANSITION BETWEEN LUMINANCE LEVELS
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12573331
DISPLAY DEVICE AND DRIVING METHOD THEREOF
Granted Mar 10, 2026 • 2y 5m to grant
Patent 12567352
METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM FOR COMPENSATING DISPLAY PANEL
Granted Mar 03, 2026 • 2y 5m to grant
Patent 12567384
Circuit Device And Display System
Granted Mar 03, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 74%
With Interview: 84% (+10.7%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate

Based on 652 resolved cases by this examiner. Grant probability derived from career allow rate.
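The "With Interview" figure appears to be the base grant probability plus the interview lift, applied as additive percentage points. A minimal sketch under that assumption (the formula is inferred from the displayed numbers, not documented by the tool):

```python
def projected_grant(base_pct: float, interview_lift_pts: float) -> float:
    """Grant probability after an additive interview lift, capped at 100%."""
    return min(base_pct + interview_lift_pts, 100.0)

base = 100.0 * 479 / 652            # career allow rate, ~73.5%
with_interview = projected_grant(base, 10.7)
print(f"{with_interview:.0f}%")     # 84% — matches the projection above
```

73.5% + 10.7 points ≈ 84.2%, which rounds to the displayed 84%, so an additive model is consistent with the panel.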
