Prosecution Insights
Last updated: April 18, 2026
Application No. 18/743,623

INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD

Status: Final Rejection (§103)
Filed: Jun 14, 2024
Examiner: TEKLE, DANIEL T
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Sumitomo Construction Machinery Co. Ltd.
OA Round: 2 (Final)
Grant Probability: 63% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 56%

Examiner Intelligence

Career Allow Rate: 63% (462 granted / 732 resolved; +5.1% vs TC avg)
Interview Lift: -6.9% (minimal lift across resolved cases with interview)
Avg Prosecution: 3y 4m (typical timeline; 46 applications currently pending)
Total Applications: 778 (career history, across all art units)

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 33.5% (-6.5% vs TC avg)
§112: 4.1% (-35.9% vs TC avg)

Comparisons are against the Tech Center average estimate • Based on career data from 732 resolved cases
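The four deltas above are each consistent with a single Tech Center baseline of 40.0% (e.g. 46.9 - 40.0 = +6.9). A minimal sketch that reproduces the displayed deltas, assuming that inferred baseline (the page does not state the TC average explicitly):

```python
TC_AVG = 40.0  # baseline implied by the four deltas above (assumption, not stated on the page)

# Examiner's statute-specific rates as shown in the panel above.
examiner_rates = {"§101": 8.7, "§103": 46.9, "§102": 33.5, "§112": 4.1}

for statute, rate in examiner_rates.items():
    delta = round(rate - TC_AVG, 1)  # percentage-point difference vs TC average
    print(f"{statute}: {rate}% ({delta:+}% vs TC avg)")
```

Running this prints the same four "vs TC avg" figures shown in the panel, which is how the baseline was inferred.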

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments and amendments received February 05, 2026 have been fully considered. With regard to 35 U.S.C. § 102, Applicant argues that the cited prior art does not disclose [see applicant argument pages 8-11]. This language corresponds to the newly amended language of claims 1-17 and 18. As such, these arguments have been considered, but they are directed to newly amended language, which is addressed below. See the rejection below for how the art of record reads on the newly amended language, as well as the examiner's interpretation of the cited art in view of the presented claim set. Furthermore, in response to Applicant's argument regarding the use of multiple circuitry, see the new ground of rejection under 35 U.S.C. § 103 as outlined below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Izumikawa US 2018/0182120.

Regarding claim 1, Izumikawa teaches:

1. An information processing system comprising: a shovel including an imaging device;

[0074] As illustrated in FIG.
4A, when the camera S6 as the stereo camera is attached to the shovel, the stereo pair image capturing unit 507 obtains a pair of camera images captured at the same time by a pair of image capturing units S61 and S62 of the camera S6 as the stereo-pair images. Then, based on the shift between a pixel of one of the pair of the captured camera images corresponding to the measurement point P and another pixel of the other of the pair of the captured camera images corresponding to the measurement point P, and the distance L between the image capturing unit S61 and the image capturing unit S62, the distance between the camera S6 and the measurement point P is obtained using the triangulation method. (Izumikawa, 0074, emphasis added)

and first circuitry; and an information processing device disposed separately from the shovel and connected to the shovel through a communication network, the information processing device including second circuitry, (Izumikawa, Fig. 3)

wherein the first circuitry configured to transmit data to the information processing device, the data including an image captured by the imaging device and at least one of a direction in which the shovel is oriented when the image is captured, a time when the image is captured, or a position of the shovel when the image is captured in association with the image,

[0074] As illustrated in FIG. 4A, when the camera S6 as the stereo camera is attached to the shovel, the stereo pair image capturing unit 507 obtains a pair of camera images captured at the same time by a pair of image capturing units S61 and S62 of the camera S6 as the stereo-pair images.
Then, based on the shift between a pixel of one of the pair of the captured camera images corresponding to the measurement point P and another pixel of the other of the pair of the captured camera images corresponding to the measurement point P, and the distance L between the image capturing unit S61 and the image capturing unit S62, the distance between the camera S6 and the measurement point P is obtained using the triangulation method.

[0075] Alternatively, as illustrated in FIG. 4B, when the camera S6a as the monocular camera is attached to the shovel, the stereo pair image capturing unit 507 obtains two camera images captured by an image capturing unit S61a of the camera S6a at different timings as the stereo-pair images. For example, the stereo pair image capturing unit 507 obtains, as the stereo-pair images, a first camera image captured when the camera S6a is at the position indicated by the solid line and a second camera image captured when the camera S6a subsequently moves to the position indicated by the dashed line. In this case, the movement of the camera S6a is made, for example, by traveling of the shovel main body. Then, the stereo pair image capturing unit 507 determines the shift amount L of the camera S6a from the positioning information of the GNSS, and obtains the distance between the camera S6a and the measurement point P using the triangulation method, similar to the case of FIG. 4A. (Izumikawa, 0074-0076, emphasis added)

and the second circuitry is configured to perform, in response to designation of at least one of the direction, the time, or the position, control for displaying the image in association with at least a designated one of the direction, the time, or the position.

[0075] Alternatively, as illustrated in FIG.
4B, when the camera S6a as the monocular camera is attached to the shovel, the stereo pair image capturing unit 507 obtains two camera images captured by an image capturing unit S61a of the camera S6a at different timings as the stereo-pair images. For example, the stereo pair image capturing unit 507 obtains, as the stereo-pair images, a first camera image captured when the camera S6a is at the position indicated by the solid line and a second camera image captured when the camera S6a subsequently moves to the position indicated by the dashed line. In this case, the movement of the camera S6a is made, for example, by traveling of the shovel main body. Then, the stereo pair image capturing unit 507 determines the shift amount L of the camera S6a from the positioning information of the GNSS, and obtains the distance between the camera S6a and the measurement point P using the triangulation method, similar to the case of FIG. 4A. (Izumikawa, 0074-0075 and Fig. 3, display device D3, emphasis added)

However, Izumikawa fails to explicitly teach having multiple circuitry used for separate tasks. Official Notice is taken that both the concept and the advantage of using multiple circuitries for separate tasks are well known and expected in the art. Thus, it would have been obvious to one skilled in the art, at the time of the applicant's invention, to utilize said feature within the system taught by Izumikawa, because such incorporation would result in producing quality processing data, since more than one circuitry is expected to handle more than one load simultaneously.

Note: The motivation that was applied to claim 1 above applies equally as well to claims 2-18 as presented below.

Regarding claim 2, Izumikawa teaches:

2.
The information processing system according to claim 1, wherein the shovel includes a plurality of the imaging devices oriented in different imaging directions, and wherein the circuitry is configured to transmit the data to the information processing device, the data including an image captured by each of the plurality of imaging devices.

[0079] For example, the stereo pair image capturing unit 507 obtains a pair of camera images captured by the camera S6 when the shovel is directed in a reference direction as indicated in FIG. 6A by the solid line, as stereo-pair images. The overlapping image capturing range R1 represents an overlapping image capturing range of the image capturing ranges of the pair of the camera images captured by the camera S6 at this time.

[0080] After that, the stereo pair image capturing unit 507 obtains a pair of camera images captured by the camera S6 when the upper turning body 3 turns in the right direction by the turning angle α as depicted by the dashed line in FIG. 6A, as stereo-pair images. The overlapping image capturing range R2 represents an overlapping image capturing range of the image capturing ranges of the pair of the camera images captured by the camera S6 at this time. (Izumikawa, 0051-0052 and 0079-0080, emphasis added)

Regarding claim 3, Izumikawa teaches:

3. The information processing system according to claim 1, wherein the second circuitry is configured to perform control for displaying a screen, using the image in association with the designated position, the screen representing surroundings with the designated position as a viewpoint.

[0051] The display device D3 includes a conversion processor D3a for generating an image. In the embodiment, the conversion processor D3a generates a camera image to be displayed based on an output of the camera S6. Accordingly, the display device D3 obtains, through the machine guidance device 50, an output of the camera S6 connected to the machine guidance device 50.
Note that the camera S6 may be connected to the display device D3, or the camera S6 may be connected to the controller 30.

[0052] The conversion processor D3a generates an image to be displayed based on an output of the controller 30 or the machine guidance device 50. In the embodiment, the conversion processor D3a converts various types of information output by the controller 30 or the machine guidance device 50 into image signals. The information output by the controller 30 includes, for example, data indicating a temperature of engine cooling water, data indicating a temperature of a hydraulic oil, data indicating a residual amount of fuel, and so forth. The information output by the machine guidance device 50 includes data indicating a position of a front end (tip) of the bucket 6, data indicating an orientation of a slope that is a work target, data indicating an orientation of the shovel, data indicating an operation direction for causing the shovel to normally face a slope, and so forth. (Izumikawa, 0051-0052 and Fig. 6, emphasis added)

Regarding claim 4, Izumikawa teaches:

4. The information processing system according to claim 1, wherein the second circuitry is configured to perform control for displaying a screen using the image captured at the designated time.

[0051] The display device D3 includes a conversion processor D3a for generating an image. In the embodiment, the conversion processor D3a generates a camera image to be displayed based on an output of the camera S6. Accordingly, the display device D3 obtains, through the machine guidance device 50, an output of the camera S6 connected to the machine guidance device 50. Note that the camera S6 may be connected to the display device D3, or the camera S6 may be connected to the controller 30.

[0052] The conversion processor D3a generates an image to be displayed based on an output of the controller 30 or the machine guidance device 50.
In the embodiment, the conversion processor D3a converts various types of information output by the controller 30 or the machine guidance device 50 into image signals. The information output by the controller 30 includes, for example, data indicating a temperature of engine cooling water, data indicating a temperature of a hydraulic oil, data indicating a residual amount of fuel, and so forth. The information output by the machine guidance device 50 includes data indicating a position of a front end (tip) of the bucket 6, data indicating an orientation of a slope that is a work target, data indicating an orientation of the shovel, data indicating an operation direction for causing the shovel to normally face a slope, and so forth. (Izumikawa, 0051-0052 and 0075-0076, emphasis added)

Regarding claim 5, Izumikawa teaches:

5. The information processing system according to claim 1, wherein second circuitry is configured to, upon receiving the designation of the direction after the control for displaying a first screen using the image, perform control for displaying a second screen, the second screen representing the designated direction from the position at which the image is captured.

[0052] The conversion processor D3a generates an image to be displayed based on an output of the controller 30 or the machine guidance device 50. In the embodiment, the conversion processor D3a converts various types of information output by the controller 30 or the machine guidance device 50 into image signals. The information output by the controller 30 includes, for example, data indicating a temperature of engine cooling water, data indicating a temperature of a hydraulic oil, data indicating a residual amount of fuel, and so forth.
The information output by the machine guidance device 50 includes data indicating a position of a front end (tip) of the bucket 6, data indicating an orientation of a slope that is a work target, data indicating an orientation of the shovel, data indicating an operation direction for causing the shovel to normally face a slope, and so forth. (Izumikawa, 0052 and Fig. 2, items D3, D3a, emphasis added)

Regarding claim 6, Izumikawa teaches:

6. The information processing system according to claim 1, wherein the second circuitry is configured to store the data in a storage unit for each of work sites in which the shovel has performed work, the data being received from the shovel, and wherein upon a desired work site being designated, perform control for displaying the image captured at the designated work site.

[0041] The storage device D4 is a device for storing various types of information. In the embodiment, a non-volatile storage medium, such as a semiconductor memory, is used as the storage device D4. The storage device D4 stores various types of information to be output by the machine guidance device 50, etc. (Izumikawa, 0041, 0051-0052 and 0072, emphasis added)

Regarding claim 7, Izumikawa teaches:

7. The information processing system according to claim 1, wherein the second circuitry is configured to combine images differing in at least one of the direction, the time, or the position included in the received data to perform control for displaying a combined image.

[0078] Here, an example of a capturing condition of the stereo-pair images is described by referring to FIG. 6A and FIG. 6B. FIG. 6A and FIG. 6B are top views of the shovel illustrating image capturing ranges of the camera S6. Specifically, FIG. 6A depicts overlapping image capturing ranges R1 and R2 of the camera S6; and FIG. 6B depicts blind spot regions BA1 and BA2 formed by an object B located behind the shovel. The parts indicated by the dashed lines in FIG. 6A and FIG.
6B, respectively, depict a state in which the upper turning body 3 is turned by a turning angle α around a turning axis SX.

[0079] For example, the stereo pair image capturing unit 507 obtains a pair of camera images captured by the camera S6 when the shovel is directed in a reference direction as indicated in FIG. 6A by the solid line, as stereo-pair images. The overlapping image capturing range R1 represents an overlapping image capturing range of the image capturing ranges of the pair of the camera images captured by the camera S6 at this time. (Izumikawa, 0051-0052 and 0078-0079, emphasis added)

Regarding claim 8, Izumikawa teaches:

8. The information processing system according to claim 7, wherein the second circuitry is configured to combine a plurality of the images captured by the shovel at a same position, at a same time, and in different directions at a time of capturing the images to generate an overhead view, and to perform control for displaying the overhead view, the overhead view representing surroundings of the shovel.

[0078] Here, an example of a capturing condition of the stereo-pair images is described by referring to FIG. 6A and FIG. 6B. FIG. 6A and FIG. 6B are top views of the shovel illustrating image capturing ranges of the camera S6. Specifically, FIG. 6A depicts overlapping image capturing ranges R1 and R2 of the camera S6; and FIG. 6B depicts blind spot regions BA1 and BA2 formed by an object B located behind the shovel. The parts indicated by the dashed lines in FIG. 6A and FIG. 6B, respectively, depict a state in which the upper turning body 3 is turned by a turning angle α around a turning axis SX.

[0079] For example, the stereo pair image capturing unit 507 obtains a pair of camera images captured by the camera S6 when the shovel is directed in a reference direction as indicated in FIG. 6A by the solid line, as stereo-pair images.
The overlapping image capturing range R1 represents an overlapping image capturing range of the image capturing ranges of the pair of the camera images captured by the camera S6 at this time. (Izumikawa, 0078-0079 and 0095, emphasis added)

Regarding claim 9, Izumikawa teaches:

9. The information processing system according to claim 7, wherein the second circuitry is configured to combine a plurality of images captured by the shovel at a same position, at different times, and in different directions to generate an overhead view, and to perform control for displaying the overhead view, the overhead view representing surroundings of the shovel.

[0095] Next, by referring to FIG. 8, another example of a procedure for generating the topography data is described. FIG. 8 is a top view of the shovel illustrating ranges to be measured by three cameras S6 (a rear camera S6B, a right side camera S6R, and a left side camera S6L) when the upper turning body 3 turns in the right direction. Specifically, the measurement target ranges X1, Y1, and Z1 are measurement target ranges included in pairs of camera images captured by the rear camera S6B, the right side camera S6R, and the left side camera S6L, respectively, when the shovel is directed in the reference direction as illustrated by the solid line in FIG. 8. The same applies to the measurement target ranges X2, Y2, Z2, X3, Y3, and Z3. (Izumikawa, 0095 and Figs. 6-8, emphasis added)

Regarding claim 10, Izumikawa teaches:

10.
The information processing system according to claim 7, wherein the second circuitry is configured to combine a plurality of the images captured by the shovel at a same position and in different directions, based on the data received from the shovel working in a work site to generate a first overhead view for each of positions included in the work site, and to subsequently generate a second overhead view, based on the generated first overhead views, the second overhead view representing the work site in an overhead view manner.

[0095] Next, by referring to FIG. 8, another example of a procedure for generating the topography data is described. FIG. 8 is a top view of the shovel illustrating ranges to be measured by three cameras S6 (a rear camera S6B, a right side camera S6R, and a left side camera S6L) when the upper turning body 3 turns in the right direction. Specifically, the measurement target ranges X1, Y1, and Z1 are measurement target ranges included in pairs of camera images captured by the rear camera S6B, the right side camera S6R, and the left side camera S6L, respectively, when the shovel is directed in the reference direction as illustrated by the solid line in FIG. 8. The same applies to the measurement target ranges X2, Y2, Z2, X3, Y3, and Z3. (Izumikawa, 0095 and Figs. 6-8, emphasis added)

Regarding claim 11, Izumikawa teaches:

11. The information processing system according to claim 1, wherein the second circuitry is configured to perform control for displaying display information indicating a position of a work site on map data, and wherein the second circuitry is configured to, upon the display information being designated, the circuitry causes the information processing device to perform control for displaying the image captured by the shovel existing in the work site, the work site being indicated by the display information.

[0051] The display device D3 includes a conversion processor D3a for generating an image.
In the embodiment, the conversion processor D3a generates a camera image to be displayed based on an output of the camera S6. Accordingly, the display device D3 obtains, through the machine guidance device 50, an output of the camera S6 connected to the machine guidance device 50. Note that the camera S6 may be connected to the display device D3, or the camera S6 may be connected to the controller 30.

[0052] The conversion processor D3a generates an image to be displayed based on an output of the controller 30 or the machine guidance device 50. In the embodiment, the conversion processor D3a converts various types of information output by the controller 30 or the machine guidance device 50 into image signals. The information output by the controller 30 includes, for example, data indicating a temperature of engine cooling water, data indicating a temperature of a hydraulic oil, data indicating a residual amount of fuel, and so forth. The information output by the machine guidance device 50 includes data indicating a position of a front end (tip) of the bucket 6, data indicating an orientation of a slope that is a work target, data indicating an orientation of the shovel, data indicating an operation direction for causing the shovel to normally face a slope, and so forth. (Izumikawa, 0051-0052 and 0075-0076, emphasis added)

Regarding claim 12, Izumikawa teaches:

12. The information processing system according to claim 1, wherein the second circuitry is configured to perform control for displaying display information indicating a position captured by the shovel on map data, the map data representing a work site, and wherein second circuitry is configured to, upon the display information being designated, to perform control for displaying the image captured by the shovel at the position indicated by the display information. (Izumikawa, 0051-0052, 0097)

Regarding claim 13, Izumikawa teaches:

13.
The information processing system according to claim 1, wherein the second circuitry is configured to perform control for displaying operation information capable of setting a time span, together with displaying a first image, and wherein second circuitry is configured to upon a time span being designated by the operation information, to perform control for displaying a second image, the second image being captured from a same position at which the first image is captured and being captured in the designated time span. (Izumikawa, 0095 and Figs. 6-8)

Regarding claim 14, Izumikawa teaches:

14. The information processing system according to claim 1, wherein the second circuitry is configured to capture an image by the imaging device every time the shovel moves by a predetermined distance or every time a predetermined time has elapsed.

[0097] Each time the upper turning body 3 turns by a turning angle α, the stereo pair image capturing unit 507 captures three pairs of stereo-pair images and derives a distance to each measurement point in the measurement target range included in each pair of stereo-pair images. Then, the topography data generator 508 derives three-dimensional coordinates of each measurement point in the camera coordinate system based on the distance to the measurement point derived by the stereo pair image capturing unit 507. Then, the coordinate converter 509 converts three-dimensional coordinates of each measurement point in the camera coordinate system derived by the topography data generator 508 into coordinates in the World Geodetic System.

Regarding claim 15, Izumikawa teaches:

15.
The information processing system according to claim 1, wherein the second circuitry is configured to further perform control for displaying display information on a screen represented based on the image, the display information representing a position at which the shovel has performed imaging, and wherein second circuitry is configured upon the display information being designated, to perform control for displaying the screen represented based on the image, the image being captured at the position indicated by the display information.

[0051] The display device D3 includes a conversion processor D3a for generating an image. In the embodiment, the conversion processor D3a generates a camera image to be displayed based on an output of the camera S6. Accordingly, the display device D3 obtains, through the machine guidance device 50, an output of the camera S6 connected to the machine guidance device 50. Note that the camera S6 may be connected to the display device D3, or the camera S6 may be connected to the controller 30.

[0052] The conversion processor D3a generates an image to be displayed based on an output of the controller 30 or the machine guidance device 50. In the embodiment, the conversion processor D3a converts various types of information output by the controller 30 or the machine guidance device 50 into image signals. The information output by the controller 30 includes, for example, data indicating a temperature of engine cooling water, data indicating a temperature of a hydraulic oil, data indicating a residual amount of fuel, and so forth. The information output by the machine guidance device 50 includes data indicating a position of a front end (tip) of the bucket 6, data indicating an orientation of a slope that is a work target, data indicating an orientation of the shovel, data indicating an operation direction for causing the shovel to normally face a slope, and so forth. (Izumikawa, 0051-0052 and 0075-0076, emphasis added)
Regarding claim 16, Izumikawa teaches:

16. The information processing system according to claim 15, wherein the display information displayed on the screen is displayed in a different display mode according to the shovel that has performed imaging from the position indicated by the display information. (Izumikawa, Figs. 6-8)

Claims 17-18 list all similar elements of claim 1, but in non-transitory computer-readable medium and method form rather than device form. Therefore, the supporting rationale of the rejection of claim 1 applies equally as well to claims 17-18.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T TEKLE, whose telephone number is (571) 270-1117. The examiner can normally be reached Monday-Friday, 8:00-4:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL T TEKLE/
Primary Examiner, Art Unit 2481
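The stereo triangulation the office action quotes from Izumikawa [0074]-[0075] (depth from the pixel shift of measurement point P between paired images and the baseline L between the image capturing units) can be sketched as follows. The focal length, baseline, and disparity values below are hypothetical illustrations, not values taken from the reference.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic stereo triangulation: Z = f * B / d.

    focal_px     -- camera focal length in pixels (hypothetical value below)
    baseline_m   -- distance L between the image capturing units (S61, S62)
    disparity_px -- pixel shift of measurement point P between the paired images
    """
    if disparity_px <= 0:
        raise ValueError("measurement point must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Binocular case (Fig. 4A): fixed baseline between S61 and S62.
z_stereo = depth_from_disparity(focal_px=1400.0, baseline_m=0.5, disparity_px=35.0)  # 20.0 m

# Monocular case (Fig. 4B): the "baseline" is the GNSS-derived shift amount L of the
# single camera S6a between the two capture timings (e.g. the shovel traveling).
z_mono = depth_from_disparity(focal_px=1400.0, baseline_m=1.0, disparity_px=35.0)  # 40.0 m
```

The two cases differ only in where the baseline comes from, which is why the reference can reuse the same triangulation for both camera configurations.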

Prosecution Timeline

Jun 14, 2024
Application Filed
Nov 01, 2025
Non-Final Rejection — §103
Feb 05, 2026
Response Filed
Apr 02, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602804: Method for Processing Three-dimensional Scanning, Three-dimensional Scanning Device, and Computer-readable Storage Medium (granted Apr 14, 2026; 2y 5m to grant)
Patent 12603969: Parking Video Recording Device, a Telematics Server and a Method for Recording a Parking Video (granted Apr 14, 2026; 2y 5m to grant)
Patent 12587615: Multi-stream Peak Bandwidth Dispersal (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573430: Interactive Video Accessibility Compliance Systems and Methods (granted Mar 10, 2026; 2y 5m to grant)
Patent 12548219: System and Method for High-resolution 3D Images Using Laser Ablation and Microscopy (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 63%
With Interview: 56% (-6.9%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 732 resolved cases by this examiner. Grant probability derived from career allow rate.
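The headline projections are recoverable from the career counts shown above. A sketch of the assumed derivation, using the 462/732 grant count and the -6.9 point interview lift from this page (the rounding convention is an assumption):

```python
granted, resolved = 462, 732   # career counts from the examiner profile above
interview_lift_pts = -6.9      # percentage-point lift shown for interviews

allow_rate = 100 * granted / resolved                    # 63.11...%
grant_probability = round(allow_rate)                    # displayed as 63%
with_interview = round(allow_rate + interview_lift_pts)  # 63.1 - 6.9 = 56.2 -> 56%

print(grant_probability, with_interview)  # 63 56
```

Both rounded values match the panel, which supports the page's note that grant probability is derived directly from the career allow rate.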
