Prosecution Insights
Last updated: April 19, 2026
Application No. 18/549,407

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM

Status: Final Rejection (§103)
Filed: Sep 07, 2023
Examiner: ALLEN, KYLA GUAN-PING TI
Art Unit: 2661
Tech Center: 2600 (Communications)
Assignee: Sony Group Corporation
OA Round: 2 (Final)
Grant Probability: 89% (Favorable)
OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (47 granted of 53 resolved, i.e. 47/53 ≈ 88.7%), above average at +26.7% vs TC avg
Interview Lift: +17.1% higher allowance among resolved cases with an interview (a strong lift)
Typical Timeline: 3y 0m average prosecution; 30 applications currently pending
Career History: 83 total applications across all art units

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 19.3% (-20.7% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 53 resolved cases.
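A minimal sketch, assuming the displayed percentages are simple ratios: the headline allow rate follows from the raw counts above, and the implied Tech Center averages can be back-solved from the stated deltas (they reconstruct to a consistent 40.0% for every statute). The script is illustrative only; the back-solved averages are estimates, not USPTO data.

```python
# Hedged sketch: reproduce the dashboard figures above from the raw counts.
# The Tech Center averages are back-solved from the stated deltas and are
# estimates for illustration, not data pulled from USPTO records.

granted, resolved = 47, 53
career_allow_rate = granted / resolved          # 0.8868... -> shown as 89%
print(f"Career allow rate: {career_allow_rate:.1%}")

# Statute-specific rates and their deltas vs the Tech Center average;
# rate - delta recovers the implied TC average (40.0% in every row).
rates = {"101": 0.099, "103": 0.525, "102": 0.193, "112": 0.174}
deltas = {"101": -0.301, "103": 0.125, "102": -0.207, "112": -0.226}
for statute, rate in rates.items():
    implied_tc_avg = rate - deltas[statute]
    print(f"§{statute}: {rate:.1%} ({deltas[statute]:+.1%} vs TC avg {implied_tc_avg:.1%})")
```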

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendments to claims 1-5, 7-8, and 11-20 have been accepted and entered. Claims 6, 9, and 10 have been cancelled. Claims 1-5, 7-8, and 11-20 are pending in this application.

Response to Arguments

Applicant’s arguments, see Remarks, filed 01/08/2026, with respect to the 112(f) interpretation applied to claims 1, 6, 7, 10, 15, 16, and 19 have been fully considered and are persuasive. The 112(f) interpretation of claims 1, 6, 7, 10, 15, 16, and 19 has been withdrawn. Applicant’s arguments, see Remarks, filed 01/08/2026, with respect to the 101 rejection applied to claim 20 have been fully considered and are persuasive. The 101 rejection of claim 20 has been withdrawn.

Applicant's arguments filed 01/08/2026 have been fully considered but they are not persuasive. Applicant states that “The Applicant respectfully submits that independent claim 1 has been amended to incorporate all the features of allowable dependent claim 10 and intervening claim 9”. However, in the Non-Final Rejection filed on 10/08/2025, while the Examiner did include claims 10-12 in the allowable subject matter section, the Examiner also stated multiple times in this section that the allowable subject matter in claim 10 was the corresponding algorithm for the frame rate control unit in claim 10, specifically para. [0107] through [0115] of the applicant’s specification. The frame rate control unit was interpreted under computer-implemented 112(f). As a result, the corresponding algorithms in the specification were read into the “frame rate control unit” in claim 10. See MPEP 2181(II)(B). Since amended claim 1 is no longer being interpreted under 112(f), independent claim 1 has not been amended to incorporate all of the features of allowable dependent claim 10, and the previously noted allowable subject matter (the corresponding algorithm of the frame rate control unit in the applicant’s specification) is no longer being read into the claim. Therefore, amended claim 1 no longer has allowable subject matter. Please see the 103 rejection of claim 1, and corresponding independent claims 19 and 20, regarding this matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8, 12, 13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Seyfi et al. (U.S.
Patent No. 11480431 B1), hereinafter Seyfi in view of Luo et al. (CN 112235513 A, see English translation for citations), hereinafter Luo, Takaura et al. (EP 2982990 A1, see original document for citations), hereinafter Takaura, and Hirawake et al. (U.S. Publication No. 2018/0080877 A1), hereinafter Hirawake. Regarding claim 1, Seyfi teaches an information processing device (Seyfi teaches a system for drone-assisted sensor mapping which includes control processors which “can process output from the various sensors and control the robotic devices 690” as shown in col. 35, lines 40-45) comprising: a central processing unit (CPU) (Seyfi teaches that “the controller 612 may include a processor or other control circuitry configured to execute instructions of a program that controls operation of a control unit system” in col. 32, line 28-35) configured to: acquire sensing information from a sensor (Seyfi teaches a drone which can include “sensors and control processors that guide movement of the robotic devices” as shown in col. 35, lines 40-45), wherein the sensor includes an image sensor that acquires image information based on light from a lighting device, and the sensing information includes the image information (Seyfi teaches that “the camera 630 may be a video/photographic camera or other type of optical sensing device configured to capture images” in col. 33, lines 48-52. Seyfi additionally teaches a three-dimensional map of the locations of features, including light sources, within an environment in col. 25, lines 18-30. Additionally, based on the map data, which comprises a lighting scenario (see col. 26, lines 33-59), “the drone 102 may send instructions 134 to the control unit 110 to close the blinds 112 and/or to turn on the lights 118a-118c” col. 27, lines 20-24); determine a position of a moving body (Seyfi teaches that the robotic device (moving body) has sensors and control processors attached to the drone in col. 35, lines 40-45. Additionally, Seyfi teaches determining a location of the drone in col. 12, lines 52-57, wherein “the pose of the drone 202 may include a position and orientation of the drone 202. The pose of the drone 202's camera(s) may include one or more positions and orientations of the camera(s)” in col. 15, lines 52-67. See also Luo’s teaching of the self-position below); generate, based on the determined self-position of the moving body, map information that includes three-dimensional information (Seyfi teaches that “map data may be, for example, a three-dimensional map or two-dimensional map. The map data may be, for example, a previously generated environment map that includes indications of the locations and/or types of objects in the property, and/or the locations of light sources (and, optionally, characteristics of the light sources) in the property” in col. 25, lines 18-32); control the lighting device based on at least one of a state of the moving body or the map information (Seyfi teaches a three-dimensional map of the locations of features, including light sources, within an environment in col. 25, lines 18-30. Additionally, based on the map data, which comprises a lighting scenario (see col. 26, lines 33-59), “the drone 102 may send instructions 134 to the control unit 110 to close the blinds 112 and/or to turn on the lights 118a-118c” col. 27, lines 20-24; Note that only one of the state or the map information need be identified here due to the “at least one of” language in the claim. 
Here, the map data is identified in the context of the above limitation), wherein the state of the moving body includes at least one of a velocity of the moving body or an angular velocity of the moving body (Seyfi teaches “the robotic devices 690 may navigate within the home …one or more accelerometers… that aid in navigation about a space. The robotic devices 690 may include control processors that process output from the various sensors and control the robotic devices 690 to move along a path that reaches the desired destination and avoids obstacles” in col. 35, lines 35-48; here it is inferred that the sensors can sense the velocity of the drones, since the system (sensor) can identify a drone through its speed). Seyfi fails to teach determine, based on the sensing information, a self-position of a moving body, wherein the information processing device is on the moving body; generate map information, wherein the three-dimensional information is associated with at least one regarding a feature point of a plurality of feature points in the map information; control a frame rate of the image sensor based on the at least one of the velocity of the moving body or the angular velocity of the moving body; determine an exposure timing of the image sensor based on the frame rate; and control, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device. However, Luo teaches determine, based on the sensing information, a self-position of a moving body, wherein the information processing device is on the moving body (Luo teaches “the positioning system based on the visual SLAM carries out positioning according to the image of the surrounding environment of the vehicle, and the image acquisition assembly 110 is arranged on the vehicle, so that the surrounding environment of the vehicle can be conveniently subjected to image acquisition when the vehicle moves” in para. [0068-0069]; SLAM is a positioning technique that carries out an identical process to the process claimed in the claim language regarding generating the self-position estimation) (Luo teaches “the image processing apparatus 100 is provided on a vehicle body 210” in para. [0113]), generate map information, wherein the three-dimensional information is associated with at least one regarding a feature point of a plurality of feature points in the map information (Luo teaches “extracting and tracking feature points of an image shot by a camera, wherein the step may also exist in a basic visual SLAM” in para. [0119]; here, the visual SLAM is interpreted as equivalent to the claimed map information). Seyfi and Luo are both considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi to incorporate the teachings of Luo and include to “determine, based on the sensing information, a self-position of a moving body, wherein the information processing device is on the moving body; generate map information, wherein the three-dimensional information is associated with at least one regarding a feature point of a plurality of feature points in the map information”.
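For illustration only: a minimal sketch of the Luo-style check the combination relies on, in which the device judges whether the scene is too dark for visual SLAM from how many feature points survive extraction and tracking between frames, and uses that signal to trigger the lighting device. The OpenCV pipeline, the threshold, and the function name are assumptions of this sketch, not disclosures of Luo.

```python
# Hedged toy sketch: judge scene darkness by feature extraction/tracking
# quality, as the rejection characterizes Luo's dark-scene recognition.
# The threshold and pipeline below are illustrative assumptions only.
import cv2
import numpy as np

MIN_TRACKED_FEATURES = 50  # assumed cutoff, not from any cited reference

def needs_fill_light(prev_gray: np.ndarray, curr_gray: np.ndarray) -> bool:
    """Return True when too few feature points survive frame-to-frame tracking."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return True  # nothing extractable at all: treat as a dark scene
    # Lucas-Kanade optical flow: status[i] == 1 where point i was tracked.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    return int(status.sum()) < MIN_TRACKED_FEATURES
```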
The motivation for doing so would have been that “the accuracy of judging whether the environment is dark or not can be improved by adopting a feature point extraction and Tracking mode”, as suggested by Luo in para. [0020]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Seyfi with Luo to obtain the invention specified in the above claim limitations. While Seyfi teaches controlling the lighting device on a basis of an exposure timing of the image sensor (Seyfi teaches “some light sources may cause areas of the image to clip or over saturate—at various points in the observation, the drone 302 may step through multiple exposure settings in order to more accurately estimate the brightness of the source without clipping” and “the system or part of the system, e.g. the drone 302 or the control unit 110 as shown in FIGS. 1A-1C, might also recognize artifacts from direct light sources, such as lens flare or blooming and seek to characterize these or model how they change with respect to exposure or pose” in col. 23, lines 5-18; Seyfi then uses these observations to control the lighting unit as shown in col. 26, lines 60-64; here, because the exposure is used to determine specific observations of the control unit, and these observations then impact the modifications made to the lighting scenario, it is concluded that Seyfi teaches controlling the lighting devices on a basis of an exposure timing of the image sensor; see also col. 12, lines 19-35, wherein the integration time can be “ramped down” as a result of entry into a brighter area, and the exposure settings may be adjusted while “approaching the transition zone” … “to prevent the camera from being over exposed”), Seyfi and Luo fail to teach control a frame rate of the image sensor based on the at least one of the velocity of the moving body or the angular velocity of the moving body; determine an exposure timing of the image sensor based on the frame rate; and control, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device. However, Takaura teaches control a frame rate of the image sensor on a basis of at least one of velocity and angular velocity of the moving body (Takaura teaches “the processor 131 obtains an optimum frame rate…according to the movement velocity of the target object” in para. [0106]; here the frame rate is controlled by the processor in response to the velocity of the moving target). Seyfi, Luo, and Takaura are all considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi (as modified by Luo) to incorporate the teachings of Takaura and include to “control a frame rate of the image sensor on a basis of at least one of velocity and angular velocity of the moving body”. The motivation for doing so would have been to “calculate an optimum frame rate” … “according to the movement velocity of the target object”, as suggested by Takaura in para. [0154]. Additionally, Takaura suggests that, “by using the movement information measuring device 10 whose target object is a recording sheet, a positional shift of an image can be suppressed with high accuracy” in para. [0114].
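For illustration only: a minimal sketch of the control chain the rejection assembles across the references, with the frame rate set from movement velocity (the Takaura-style policy), the exposure window derived from the frame period (the Hirawake-style step), and the light source switched on only during that window (the Hirawake-style synchronization). The proportional gain, the clamps, the 50% duty cycle, and all identifiers are assumptions of this sketch, not content of the cited references.

```python
# Hedged composite sketch of the claimed velocity -> frame rate ->
# exposure -> lighting chain. All constants are editorial assumptions.
from dataclasses import dataclass

@dataclass
class CameraLightController:
    min_fps: float = 15.0
    max_fps: float = 120.0
    fps_per_unit: float = 30.0  # assumed gain: fps per m/s (or rad/s)

    def frame_rate(self, speed_mps: float, ang_vel_rps: float) -> float:
        # Faster motion demands a higher frame rate, clamped to sensor limits.
        demand = self.fps_per_unit * max(speed_mps, ang_vel_rps)
        return min(self.max_fps, max(self.min_fps, demand))

    def exposure_s(self, fps: float) -> float:
        # Exposure window derived from the frame period; an assumed 50% duty.
        return 0.5 / fps

    def lighting_schedule(self, fps: float) -> tuple[float, float]:
        # Light on only while the sensor is exposing: (on, off) per frame.
        on = self.exposure_s(fps)
        return on, (1.0 / fps) - on

ctl = CameraLightController()
fps = ctl.frame_rate(speed_mps=2.0, ang_vel_rps=0.5)   # -> 60 fps
on, off = ctl.lighting_schedule(fps)
print(f"{fps:.0f} fps, light on {on*1000:.1f} ms / off {off*1000:.1f} ms per frame")
```

Under the same characterization, the threshold comparison of claim 14 would be a branch on speed inside frame_rate(), and the intensity-dependent exposure length of claim 15 would parameterize exposure_s(); the simplest proportional policy is shown here only for orientation.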
Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Seyfi and Luo with Takaura to obtain the invention specified in the above claim limitations. Seyfi, Luo, and Takaura fail to teach determine an exposure timing of the image sensor based on the frame rate; and control, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device. However, Hirawake teaches determine an exposure timing of the image sensor based on the frame rate (Hirawake teaches "the exposure time set to be variable in a range of 60 msec to 1 msec in accordance with an adjusted frame rate" in para. [0040]); and control, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device (Hirawake teaches “the control unit 9 controls ON/OFF of the excitation light emitted by the light emitting device 5 and the exposure timing of the imaging element 3 b so that they are synchronized” in para. [0042]). Seyfi, Luo, Takaura, and Hirawake are all considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi (as modified by Luo and Takaura) to incorporate the teachings of Hirawake and include to “determine an exposure timing of the image sensor based on the frame rate; and control, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device”. The motivation for doing so would have been “to variably set the exposure time in at least the range of 1 msec to 30 msec in order to obtain an optimum fluorescence image under various operating environments”, as suggested by Hirawake in para. [0040]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Seyfi, Luo, and Takaura with Hirawake to obtain the invention specified in claim 1. Regarding claim 2, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 1, wherein the CPU is further configured to control the lighting device based on a state of the at least one feature point, and the at least one feature point is around the moving body in the map information (Seyfi teaches that the drone can “record observations about lighting elements, e.g. lighting sources, through indirect observation with its onboard camera(s) by observing the relative brightness of other surfaces in the environment, e.g. the room 310, and mapping how they change with lighting changes or with drone 302's pose or position, the system can measure and record in the environment map” in col. 23, lines 25-33; additionally, Seyfi teaches a three-dimensional map of the locations of features, including light sources, within an environment in col. 25, lines 18-30; based on the map data, which comprises a lighting scenario (see col. 26, lines 33-59) “the drone 102 may send instructions 134 to the control unit 110 to close the blinds 112 and/or to turn on the lights 118a-118c” col.
27, lines 20-24; here, the map taught above can be combined with the feature points of the target features (moving body) taught by Luo in claim 1, wherein the lighting device being controlled based on the state of the map information taught by Seyfi can be taken in combination with the specific feature points of the target features (moving body) taught by Luo). Similar motivations as applied to claim 1 can be applied here. Regarding claim 3, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 2, wherein the state of the at least one feature point includes at least one of a number of the plurality of feature points, a distance from the moving body to the at least one feature point, and a contrast of the at least one feature point in the map information (Luo teaches “calculating the tracking number and the tracking frame length of the feature points” in para. [0121]; wherein the tracking number is interpreted as equivalent to the number of feature points; Note that only one state need be identified here due to the “at least one of” language in the claim). Similar motivations as applied to claim 1 can be applied here. ***NOTE: Only one limitation is mapped to here due to the “at least one of” language in the claim. Regarding claim 4, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 3, wherein the CPU is further configured to control the lighting device based on information that indicates an ease to track the at least one feature point (Luo teaches “the determination component 130 mainly refers to an algorithm library formed by algorithms such as feature tracking quality evaluation, dark scene recognition and the like, and is used for determining whether to start light supplement and adjust light supplement parameters” in para. [0115]). Similar motivations as applied to claim 1 can be applied here. Regarding claim 5, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 3, wherein the CPU is further configured to: determine at least one valid feature point of the plurality of feature points (Seyfi teaches “the drone 202 might choose to plot a path through a well-lit part of the room” as shown in col. 16, lines 25-38, wherein these well-lit features are interpreted as equivalent to the valid area. See also Luo’s teaching of specific feature points in claim 1 which can be combined with Seyfi’s teaching of the valid feature area to teach the above claim limitation); detect an invalid area based on the image information, wherein the invalid area indicates an area in which the at least one valid feature point is absent (Seyfi teaches that the drone may detect “a dimly lit section where visual features are known to be scarce given the current lighting conditions or where there is little correlation at all between lighting conditions and observed/detected visual features” as shown in col. 16, lines 25-38; this dimly lit area is interpreted as equivalent to the claimed invalid area. See also Luo’s teaching of specific feature points in claim 1 which can be combined with Seyfi’s teaching of the invalid feature area to teach the above claim limitation); and control the lighting device further based on a position of the detected invalid area (Seyfi teaches “the drone 202 might choose to plot a path through a well-lit part of the room, e.g.
the room 210b, where it has previously seen little variation in detected features due to lighting intensity or source, while avoiding a dimly lit section where visual features are known to be scarce given the current lighting conditions or where there is little correlation at all between lighting conditions and observed/detected visual features” in col. 16, lines 25-38; here, the lighting unit is controlled based on the state of the features surrounding the drone. The features taught here can be combined with the specific feature points as taught by Luo). Similar motivations as applied to claim 1 can be applied here. Regarding claim 8, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 2, wherein the CPU is further configured to control at least one of an illumination direction of the lighting device or a lighting intensity of the lighting device (Luo teaches “after confirming that the agitator truck is in the dark environment, light supplement parameters such as light supplement or brightness and angle adjustment can be started” in para. [0127]; here, the brightness is interpreted as the lighting intensity, and the angle is interpreted as equivalent to the illumination direction). Similar motivations as applied to claim 1 can be applied here. Regarding claim 12, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 1, wherein the CPU is further configured to control the frame rate based on at least one of a distance between the moving body and the at least one feature point, the velocity of the moving body (Takaura teaches that “a movement velocity in an arbitrary frame number may be estimated from measurement values of movement velocities in two or more frame numbers antecedent to the arbitrary frame number so as to adjust a frame rate, an exposure time, and light emission power of a light source” in para. [0180]. See also para. [0154]), or the angular velocity of the moving body. Similar motivations as applied to claim 1 can be applied here. ***NOTE: Only one limitation is mapped to here due to the “at least one of” language in the claim. Regarding claim 13, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 1, wherein the CPU is further configured to control lighting intensity of the lighting device based on the at least one of the velocity of the moving body or the angular velocity of the moving body (Takaura teaches “a movement velocity in an arbitrary frame number may be estimated from measurement values of movement velocities in two or more frame numbers antecedent to the arbitrary frame number so as to adjust … light emission power of a light source” in para. [0180]). Similar motivations as applied to claim 1 can be applied here. Regarding claim 16, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 1, wherein the CPU is further configured to: determine the state of the moving body based on the estimated self-position and at least part of the sensing information (Seyfi teaches determining a location of the drone in col. 12, lines 52-57, and col. 15, lines 52-67, while Luo teaches “the positioning system based on the visual SLAM carries out positioning according to the image of the surrounding environment of the vehicle, and the image acquisition assembly 110 is arranged on the vehicle, so that the surrounding environment of the vehicle can be conveniently subjected to image acquisition when the vehicle moves” in para.
[0068-0069]; SLAM is a positioning technique that carries out an identical process to the process claimed in the claim language regarding generating the self-position estimation, wherein a state of validity or invalidity is determined based on the sensing information and estimated self-position as shown in para. [0074]); and control the lighting device based on at least one of the determined state of the moving body or the map information (Seyfi teaches a three-dimensional map of the locations of features, including light sources, within an environment in col. 25, lines 18-30. Additionally, based on the map data, which comprises a lighting scenario (see col. 26, lines 33-59), “the drone 102 may send instructions 134 to the control unit 110 to close the blinds 112 and/or to turn on the lights 118a-118c” col. 27, lines 20-24; Note that only one of the state or the map information need be identified here due to the “at least one of” language in the claim). Regarding claim 17, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 1, wherein the sensor further includes at least one of an acceleration sensor or an angular velocity sensor (Seyfi teaches robotic devices which may include “one or more accelerometers” and “one or more gyroscopes” in col. 35, lines 35-48; it should be noted that a gyroscope measures angular velocity), and the sensing information further includes at least one of the velocity of the moving body or the angular velocity of the moving body (Seyfi teaches “the robotic devices 690 may navigate within the home …one or more accelerometers… that aid in navigation about a space. The robotic devices 690 may include control processors that process output from the various sensors and control the robotic devices 690 to move along a path that reaches the desired destination and avoids obstacles” in col. 35, lines 35-48; here it is inferred that the sensors can sense the velocity of the drones, since the system (sensor) can identify a drone through its speed). Regarding claim 18, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 1, wherein the state of the moving body further includes at least one of a position of the moving body (Seyfi teaches that “the drone 302 and/or control unit 110 may indicate the position, orientation, and/or angle of the drone 302 and/or its onboard camera that should be used or avoided at various points within the environment” in col. 19, lines 2-6; here, the location/position of the drone is interpreted as the position of the moving body claimed in the claim language; Note that only one state need be identified here due to the “at least one of” language in the claim) or a posture of the moving body. ***NOTE: Only one limitation is mapped to here due to the “at least one of” language in the claim. Regarding claim 19, Seyfi teaches an information processing method (Seyfi teaches a system for drone-assisted sensor mapping which includes control processors which “can process output from the various sensors and control the robotic devices 690” as shown in col. 35, lines 40-45; see also Abstract) comprising: acquiring sensing information from a sensor (Seyfi teaches a drone which can include “sensors and control processors that guide movement of the robotic devices” as shown in col.
35, lines 40-45), wherein the sensor includes an image sensor that acquires image information based on light from a lighting device, and the sensing information includes the image information (Seyfi teaches that “the camera 630 may be a video/photographic camera or other type of optical sensing device configured to capture images” in col. 33, lines 48-52. Seyfi additionally teaches a three-dimensional map of the locations of features, including light sources, within an environment in col. 25, lines 18-30. Additionally, based on the map data, which comprises a lighting scenario (see col. 26, lines 33-59), “the drone 102 may send instructions 134 to the control unit 110 to close the blinds 112 and/or to turn on the lights 118a-118c” col. 27, lines 20-24); determining a position of a moving body (Seyfi teaches that the robotic device (moving body) has sensors and control processors attached to the drone in col. 35, lines 40-45. Additionally, Seyfi teaches determining a location of the drone in col. 12, lines 52-57, wherein “the pose of the drone 202 may include a position and orientation of the drone 202. The pose of the drone 202's camera(s) may include one or more positions and orientations of the camera(s)” in col. 15, lines 52-67. See also Luo’s teaching of the self-position below); generating, based on the determined self-position of the moving body, map information that includes three-dimensional information (Seyfi teaches that “map data may be, for example, a three-dimensional map or two-dimensional map. The map data may be, for example, a previously generated environment map that includes indications of the locations and/or types of objects in the property, and/or the locations of light sources (and, optionally, characteristics of the light sources) in the property” in col. 25, lines 18-32); controlling the lighting device based on at least one of a state of the moving body or the map information (Seyfi teaches a three-dimensional map of the locations of features, including light sources, within an environment in col. 25, lines 18-30. Additionally, based on the map data, which comprises a lighting scenario (see col. 26, lines 33-59), “the drone 102 may send instructions 134 to the control unit 110 to close the blinds 112 and/or to turn on the lights 118a-118c” col. 27, lines 20-24; Note that only one of the state or the map information need be identified here due to the “at least one of” language in the claim. Here, the map data is identified in the context of the above limitation), wherein the state of the moving body includes at least one of a velocity of the moving body or an angular velocity of the moving body (Seyfi teaches “the robotic devices 690 may navigate within the home …one or more accelerometers… that aid in navigation about a space. The robotic devices 690 may include control processors that process output from the various sensors and control the robotic devices 690 to move along a path that reaches the desired destination and avoids obstacles” in col. 35, lines 35-48; here it is inferred that the sensors can sense the velocity of the drones, since the system (sensor) can identify a drone through its speed). 
Seyfi fails to teach determining, based on the sensing information, a self-position of a moving body, wherein the information processing device is on the moving body; generating map information, wherein the three-dimensional information is associated with at least one regarding a feature point of a plurality of feature points in the map information; controlling a frame rate of the image sensor based on the at least one of the velocity of the moving body or the angular velocity of the moving body; determining an exposure timing of the image sensor based on the frame rate; and controlling, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device. However, Luo teaches determining, based on the sensing information, a self-position of a moving body, wherein the information processing device is on the moving body (Luo teaches “the positioning system based on the visual SLAM carries out positioning according to the image of the surrounding environment of the vehicle, and the image acquisition assembly 110 is arranged on the vehicle, so that the surrounding environment of the vehicle can be conveniently subjected to image acquisition when the vehicle moves” in para. [0068-0069]; SLAM is a positioning technique that carries out an identical process to the process claimed in the claim language regarding generating the self-position estimation) (Luo teaches “the image processing apparatus 100 is provided on a vehicle body 210” in para. [0113]), generating map information, wherein the three-dimensional information is associated with at least one regarding a feature point of a plurality of feature points in the map information (Luo teaches “extracting and tracking feature points of an image shot by a camera, wherein the step may also exist in a basic visual SLAM” in para. [0119]; here, the visual SLAM is interpreted as equivalent to the claimed map information). Seyfi and Luo are both considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi to incorporate the teachings of Luo and include “determining, based on the sensing information, a self-position of a moving body, wherein the information processing device is on the moving body; generating map information, wherein the three-dimensional information is associated with at least one regarding a feature point of a plurality of feature points in the map information”. The motivation for doing so would have been that “the accuracy of judging whether the environment is dark or not can be improved by adopting a feature point extraction and Tracking mode”, as suggested by Luo in para. [0020]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Seyfi with Luo to obtain the invention specified in the above claim limitations. While Seyfi teaches controlling the lighting device on a basis of an exposure timing of the image sensor (Seyfi teaches “some light sources may cause areas of the image to clip or over saturate—at various points in the observation, the drone 302 may step through multiple exposure settings in order to more accurately estimate the brightness of the source without clipping” and “the system or part of the system, e.g. the drone 302 or the control unit 110 as shown in FIGS.
1A-1C, might also recognize artifacts from direct light sources, such as lens flare or blooming and seek to characterize these or model how they change with respect to exposure or pose” in col. 23, lines 5-18; Seyfi then uses these observations to control the lighting unit as shown in col. 26, lines 60-64; here, because the exposure is used to determine specific observations of the control unit, and these observations then impact the modifications made to the lighting scenario, it is concluded that Seyfi teaches controlling the lighting devices on a basis of an exposure timing of the image sensor; see also col. 12, lines 19-35, wherein the integration time can be “ramped down” as a result of entry into a brighter area, and the exposure settings may be adjusted while “approaching the transition zone” … “to prevent the camera from being over exposed”), Seyfi and Luo fail to teach controlling a frame rate of the image sensor based on the at least one of the velocity of the moving body or the angular velocity of the moving body; determining an exposure timing of the image sensor based on the frame rate; and controlling, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device. However, Takaura teaches controlling a frame rate of the image sensor on a basis of at least one of velocity and angular velocity of the moving body (Takaura teaches “the processor 131 obtains an optimum frame rate…according to the movement velocity of the target object” in para. [0106]; here the frame rate is controlled by the processor in response to the velocity of the moving target). Seyfi, Luo, and Takaura are all considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi (as modified by Luo) to incorporate the teachings of Takaura and include “controlling a frame rate of the image sensor on a basis of at least one of velocity and angular velocity of the moving body”. The motivation for doing so would have been to “calculate an optimum frame rate” … “according to the movement velocity of the target object”, as suggested by Takaura in para. [0106]. Additionally, Takaura suggests that, “by using the movement information measuring device 10 whose target object is a recording sheet, a positional shift of an image can be suppressed with high accuracy” in para. [0114]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Seyfi and Luo with Takaura to obtain the invention specified in the above claim limitations. Seyfi, Luo, and Takaura fail to teach determining an exposure timing of the image sensor based on the frame rate; and controlling, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device. However, Hirawake teaches determining an exposure timing of the image sensor based on the frame rate (Hirawake teaches "the exposure time set to be variable in a range of 60 msec to 1 msec in accordance with an adjusted frame rate" in para.
[0040]); and controlling, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device (Hirawake teaches “the control unit 9 controls ON/OFF of the excitation light emitted by the light emitting device 5 and the exposure timing of the imaging element 3 b so that they are synchronized” in para. [0042]). Seyfi, Luo, Takaura, and Hirawake are all considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi (as modified by Luo and Takaura) to incorporate the teachings of Hirawake and include “determining an exposure timing of the image sensor based on the frame rate; and controlling, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device”. The motivation for doing so would have been ” to variably set the exposure time in at least the range of 1 msec to 30 msec in order to obtain an optimum fluorescence image under various operating environments”, as suggested by Hirawake in para. [0040]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Seyfi, Luo, and Takaura with Hirawake to obtain the invention specified in claim 19. Regarding claim 20, Seyfi teaches a non-transitory computer-readable medium having stored thereon, computer-executable instructions which, when executed by an information processing device, cause the information processing device to execute operations (Seyfi teaches “a processor will receive instructions and data from a read-only memory and/or a random access memory” in col. 42, lines 56-58), the operations comprising: acquiring sensing information from a sensor (Seyfi teaches a drone which can include “sensors and control processors that guide movement of the robotic devices” as shown in col. 35, lines 40-45), wherein the sensor includes an image sensor that acquires image information based on light from a lighting device, and the sensing information includes the image information (Seyfi teaches that “the camera 630 may be a video/photographic camera or other type of optical sensing device configured to capture images” in col. 33, lines 48-52. Seyfi additionally teaches a three-dimensional map of the locations of features, including light sources, within an environment in col. 25, lines 18-30. Additionally, based on the map data, which comprises a lighting scenario (see col. 26, lines 33-59), “the drone 102 may send instructions 134 to the control unit 110 to close the blinds 112 and/or to turn on the lights 118a-118c” col. 27, lines 20-24); determining a position of a moving body (Seyfi teaches that the robotic device (moving body) has sensors and control processors attached to the drone in col. 35, lines 40-45. Additionally, Seyfi teaches determining a location of the drone in col. 12, lines 52-57, wherein “the pose of the drone 202 may include a position and orientation of the drone 202. The pose of the drone 202's camera(s) may include one or more positions and orientations of the camera(s)” in col. 15, lines 52-67. 
See also Luo’s teaching of the self-position below); generating, based on the determined self-position of the moving body, map information that includes three-dimensional information (Seyfi teaches that “map data may be, for example, a three-dimensional map or two-dimensional map. The map data may be, for example, a previously generated environment map that includes indications of the locations and/or types of objects in the property, and/or the locations of light sources (and, optionally, characteristics of the light sources) in the property” in col. 25, lines 18-32); controlling the lighting device based on at least one of a state of the moving body or the map information (Seyfi teaches a three-dimensional map of the locations of features, including light sources, within an environment in col. 25, lines 18-30. Additionally, based on the map data, which comprises a lighting scenario (see col. 26, lines 33-59), “the drone 102 may send instructions 134 to the control unit 110 to close the blinds 112 and/or to turn on the lights 118a-118c” col. 27, lines 20-24; Note that only one of the state or the map information need be identified here due to the “at least one of” language in the claim. Here, the map data is identified in the context of the above limitation), wherein the state of the moving body includes at least one of a velocity of the moving body or an angular velocity of the moving body (Seyfi teaches “the robotic devices 690 may navigate within the home …one or more accelerometers… that aid in navigation about a space. The robotic devices 690 may include control processors that process output from the various sensors and control the robotic devices 690 to move along a path that reaches the desired destination and avoids obstacles” in col. 35, lines 35-48; here it is inferred that the sensors can sense the velocity of the drones, since the system (sensor) can identify a drone through its speed). Seyfi fails to teach determining, based on the sensing information, a self-position of a moving body, wherein the information processing device is on the moving body; generating map information, wherein the three-dimensional information is associated with at least one regarding a feature point of a plurality of feature points in the map information; controlling a frame rate of the image sensor based on the at least one of the velocity of the moving body or the angular velocity of the moving body; determining an exposure timing of the image sensor based on the frame rate; and controlling, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device. However, Luo teaches determining, based on the sensing information, a self-position of a moving body, wherein the information processing device is on the moving body (Luo teaches “the positioning system based on the visual SLAM carries out positioning according to the image of the surrounding environment of the vehicle, and the image acquisition assembly 110 is arranged on the vehicle, so that the surrounding environment of the vehicle can be conveniently subjected to image acquisition when the vehicle moves” in para. [0068-0069]; SLAM is a positioning technique that carries out an identical process to the process claimed in the claim language regarding generating the self-position estimation) (Luo teaches “the image processing apparatus 100 is provided on a vehicle body 210” in para.
[0113]), generating map information, wherein the three-dimensional information is associated with at least one regarding a feature point of a plurality of feature points in the map information (Luo teaches “extracting and tracking feature points of an image shot by a camera, wherein the step may also exist in a basic visual SLAM” in para. [0119]; here, the visual SLAM is interpreted as equivalent to the claimed map information). Seyfi and Luo are both considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi to incorporate the teachings of Luo and include “determining, based on the sensing information, a self-position of a moving body, wherein the information processing device is on the moving body; generating map information, wherein the three-dimensional information is associated with at least one regarding a feature point of a plurality of feature points in the map information”. The motivation for doing so would have been that “the accuracy of judging whether the environment is dark or not can be improved by adopting a feature point extraction and Tracking mode”, as suggested by Luo in para. [0020]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Seyfi with Luo to obtain the invention specified in the above claim limitations. While Seyfi teaches controlling the lighting device on a basis of an exposure timing of the image sensor (Seyfi teaches “some light sources may cause areas of the image to clip or over saturate—at various points in the observation, the drone 302 may step through multiple exposure settings in order to more accurately estimate the brightness of the source without clipping” and “the system or part of the system, e.g. the drone 302 or the control unit 110 as shown in FIGS. 1A-1C, might also recognize artifacts from direct light sources, such as lens flare or blooming and seek to characterize these or model how they change with respect to exposure or pose” in col. 23, lines 5-18; Seyfi then uses these observations to control the lighting unit as shown in col. 26, lines 60-64; here, because the exposure is used to determine specific observations of the control unit, and these observations then impact the modifications made to the lighting scenario, it is concluded that Seyfi teaches controlling the lighting devices on a basis of an exposure timing of the image sensor; see also col. 12, lines 19-35, wherein the integration time can be “ramped down” as a result of entry into a brighter area, and the exposure settings may be adjusted while “approaching the transition zone”…”to prevent the camera from being over exposed”), Seyfi and Luo fail to teach controlling a frame rate of the image sensor based on the at least one of the velocity of the moving body or the angular velocity of the moving body; determining an exposure timing of the image sensor based on the frame rate; and controlling, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device. 
However, Takaura teaches controlling a frame rate of the image sensor on a basis of at least one of velocity and angular velocity of the moving body (Takaura teaches “the processor 131 obtains an optimum frame rate…according to the movement velocity of the target object” in para. [0106]; here the frame rate is controlled by the processor in response to the velocity of the moving target). Seyfi, Luo, and Takaura are all considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi (as modified by Luo) to incorporate the teachings of Takaura and include “controlling a frame rate of the image sensor on a basis of at least one of velocity and angular velocity of the moving body”. The motivation for doing so would have been to “calculate an optimum frame rate” … “according to the movement velocity of the target object”, as suggested by Takaura in para. [0106]. Additionally, Takaura suggests that, “by using the movement information measuring device 10 whose target object is a recording sheet, a positional shift of an image can be suppressed with high accuracy” in para. [0114]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Seyfi and Luo with Takaura to obtain the invention specified in the above claim limitations. Seyfi, Luo, and Takaura fail to teach determining an exposure timing of the image sensor based on the frame rate; and controlling, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device. However, Hirawake teaches determining an exposure timing of the image sensor based on the frame rate (Hirawake teaches "the exposure time set to be variable in a range of 60 msec to 1 msec in accordance with an adjusted frame rate" in para. [0040]); and controlling, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device (Hirawake teaches “the control unit 9 controls ON/OFF of the excitation light emitted by the light emitting device 5 and the exposure timing of the imaging element 3 b so that they are synchronized” in para. [0042]). Seyfi, Luo, Takaura, and Hirawake are all considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi (as modified by Luo and Takaura) to incorporate the teachings of Hirawake and include “determining an exposure timing of the image sensor based on the frame rate; and controlling, based on the determined exposure timing of the image sensor, turn on and turn off of the lighting device”. The motivation for doing so would have been “to variably set the exposure time in at least the range of 1 msec to 30 msec in order to obtain an optimum fluorescence image under various operating environments”, as suggested by Hirawake in para. [0040]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Seyfi, Luo, and Takaura with Hirawake to obtain the invention specified in claim 20. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Seyfi et al. (U.S.
Patent No. 11480431 B1), hereinafter Seyfi in view of Luo et al. (CN 112235513 A, see English translation for citations), hereinafter Luo, Takaura et al. (EP 2982990 A1, see original document for citations), hereinafter Takaura, Hirawake et al. (U.S. Publication No. 2018/0080877 A1), hereinafter Hirawake, and Trim et al. (U.S. Patent No. 10455669 B1), hereinafter Trim. Regarding claim 7, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 5. Seyfi, Luo, Takaura, and Hirawake fail to teach wherein the CPU is further configured to receive a control signal corresponding to an instruction from a user, and the control signal indicates the position of the invalid area. However, Trim teaches wherein the CPU is further configured to receive a control signal corresponding to an instruction from a user, and the control signal indicates the position of the invalid area (Trim teaches a “lighting device 62A [which] increases its light output based on instructions received at controller 63A from the lighting control module 86”, allowing the user to “change a level of brightness in one or more localized areas of a physical location” as shown in col. 12, lines 35-50; here, the localized area of the physical location is equivalent to the position of the invalid area. Additionally, Trim’s teaching can be combined with Seyfi’s teaching of the invalid area in claim 5). Seyfi, Luo, Takaura, Hirawake, and Trim are all considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi (as modified by Luo, Takaura, and Hirawake) to incorporate the teachings of Trim and include “wherein the CPU is further configured to receive a control signal corresponding to an instruction from a user, and the control signal indicates the position of the invalid area”. The motivation for doing so would have been to “provide improvements to the function of mobile devices by enabling the mobile devices to dynamically adjust lighting at a location to provide a desired illumination level for a select area within the location”, as suggested by Trim in col. 2, lines 59-64. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Seyfi, Luo, Takaura, and Hirawake with Trim to obtain the invention specified in claim 7. Claims 11, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Seyfi et al. (US 11480431 B1), hereinafter Seyfi in view of Luo et al. (CN 112235513 A, see English translation for citations), hereinafter Luo, Takaura et al. (EP 2982990 A1, see original document for citations), hereinafter Takaura, Hirawake et al. (U.S. Publication No. 2018/0080877 A1), hereinafter Hirawake, and Alakarhu (U.S. Publication No. 2020/0349380 A1). Regarding claim 11, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 1. While Seyfi teaches controlling the on and off of the lighting device as shown in claim 1, Seyfi, Luo, Takaura, and Hirawake fail to teach wherein the CPU is further configured to: turn on the lighting device in an exposure period of the image sensor; and turn off the lighting device outside the exposure period of the image sensor.
However, Alakarhu teaches wherein the CPU is further configured to: turn on the lighting device in an exposure period of the image sensor; and turn off the lighting device outside the exposure period of the image sensor (Alakarhu teaches “a[] LED pulse and camera exposure time are aligned to capture numerous images with varying configuration settings” in para. [0038]; here, the micro-controller determines the necessary light source level, and adjusts the exposure time accordingly as shown in para. [0038]; Alakarhu further teaches “the camera apparatus 301 relies on the light emitting apparatus 400 to provide a pulse of infrared light at the moment of, or just immediately prior to, the shutter 302 on the camera apparatus 301 opening” and “ the camera assembly may operate a pre-defined sequence of configuration settings at pre-defined intervals” in para. [0037]; here, because the camera and the light emitting apparatus can be synched, and the micro-controller controls the camera, the lighting, and the exposure time, it can be inferred that the lighting control unit (micro-controller) turns on the lighting device during the exposure period and turns it off at the end of the exposure period). Seyfi, Luo, Takaura, Hirawake, and Alakarhu are all considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi (as modified by Luo, Takaura, and Hirawake) to incorporate the teachings of Alakarhu and include “wherein the CPU is further configured to: turn on the lighting device in an exposure period of the image sensor; and turn off the lighting device outside the exposure period of the image sensor”. The motivation for doing so would have been that “light source 306 (or light emitting apparatus 400) provides functionality to the overall system because it provides the illumination pattern for improving image capture quality”, as suggested by Alakarhu in para. [0046]. Therefore, it would have been obvious to one of ordinary skill at the time the invention was filed to combine Seyfi, Luo, Takaura, and Hirawake with Alakarhu to obtain the invention specified in claim 11. Regarding claim 14, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 13. While Seyfi teaches controlling the on and off of the lighting device as shown in claim 1, Seyfi, Luo, Takaura, and Hirawake fail to teach wherein the CPU is further configured to: compare the at least one of the velocity of the moving body or the angular velocity of the moving body with a specific threshold; and control the turn on and the turn off of the lighting device based on a result of the comparison of the at least one of the velocity or the angular velocity with the specific threshold. However, Alakarhu teaches wherein the CPU is further configured to: compare the at least one of the velocity of the moving body or the angular velocity of the moving body with a specific threshold (Alakarhu teaches “outputting a low value for the illumination command when the relative speed is below a threshold speed” in para. 
Regarding claim 14, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 13. While Seyfi teaches controlling the on and off of the lighting device as shown in claim 1, Seyfi, Luo, Takaura, and Hirawake fail to teach wherein the CPU is further configured to: compare the at least one of the velocity of the moving body or the angular velocity of the moving body with a specific threshold; and control the turn on and the turn off of the lighting device based on a result of the comparison of the at least one of the velocity or the angular velocity with the specific threshold.

However, Alakarhu teaches wherein the CPU is further configured to: compare the at least one of the velocity of the moving body or the angular velocity of the moving body with a specific threshold (Alakarhu teaches “outputting a low value for the illumination command when the relative speed is below a threshold speed” in para. [0010]); and control the turn on and the turn off of the lighting device based on a result of the comparison of the at least one of the velocity or the angular velocity with the specific threshold (Alakarhu teaches “outputting a low value for the illumination command when the relative speed is below a threshold speed” in para. [0010] and adjusting the light emitting apparatus based on an illumination command in which “illumination commands may include any denotation of varying illumination levels” in paras. [0047]-[0048], wherein it is inherent that these levels may include both on and off. See also Seyfi’s teaching of turning the lighting device on and off in claim 1).

Seyfi, Luo, Takaura, Hirawake, and Alakarhu are all considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi (as modified by Luo, Takaura, and Hirawake) to incorporate the teachings of Alakarhu and include “wherein the CPU is further configured to: compare the at least one of the velocity of the moving body or the angular velocity of the moving body with a specific threshold; and control the turn on and the turn off of the lighting device based on a result of the comparison of the at least one of the velocity or the angular velocity with the specific threshold”. The motivation for doing so would have been that “light source 306 (or light emitting apparatus 400) provides functionality to the overall system because it provides the illumination pattern for improving image capture quality”, as suggested by Alakarhu in para. [0046]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Seyfi, Luo, Takaura, and Hirawake with Alakarhu to obtain the invention specified in claim 14.
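Claim 14 reduces to a comparator: at least one of the moving body's velocity or angular velocity is compared with a specific threshold, and the comparison result drives the lighting device on or off. A minimal sketch with made-up threshold values (neither the claims nor the cited art specify any particular numbers):

```python
def lighting_should_be_on(velocity: float, angular_velocity: float,
                          velocity_threshold: float = 5.0,
                          angular_threshold: float = 0.5) -> bool:
    # Compare at least one of the velocity or angular velocity of the
    # moving body with a specific threshold; the boolean result controls
    # the turn-on and turn-off of the lighting device.
    return velocity >= velocity_threshold or angular_velocity >= angular_threshold

# Slow, steady motion keeps the light off; fast motion turns it on.
print(lighting_should_be_on(velocity=2.0, angular_velocity=0.1))  # False
print(lighting_should_be_on(velocity=8.0, angular_velocity=0.1))  # True
```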
Regarding claim 15, Seyfi, Luo, Takaura, and Hirawake teach the information processing device according to claim 13. While Seyfi teaches controlling the on and off of the lighting device as shown in claim 1, Seyfi, Luo, Takaura, and Hirawake fail to teach wherein the CPU is further configured to: control a length of an exposure period of the image sensor based on the lighting intensity of the lighting device; turn on the lighting device during the exposure period of the image sensor; and turn off the lighting device outside the exposure period of the image sensor.

However, Alakarhu teaches wherein the CPU is further configured to: control a length of an exposure period of the image sensor based on the lighting intensity of the lighting device; turn on the lighting device during the exposure period of the image sensor; and turn off the lighting device outside the exposure period of the image sensor (Alakarhu teaches “a[] LED pulse and camera exposure time are aligned to capture numerous images with varying configuration settings” in para. [0038]; here, the micro-controller determines the necessary light source level and adjusts the exposure time accordingly, as shown in para. [0038]; Alakarhu further teaches “the camera apparatus 301 relies on the light emitting apparatus 400 to provide a pulse of infrared light at the moment of, or just immediately prior to, the shutter 302 on the camera apparatus 301 opening” and “the camera assembly may operate a pre-defined sequence of configuration settings at pre-defined intervals” in para. [0037]; here, because the camera and the light emitting apparatus can be synchronized, and the micro-controller controls the camera, the lighting, and the exposure time, it can be inferred that the lighting control unit (micro-controller) turns on the lighting device during the exposure period and turns it off at the end of the exposure period).

Seyfi, Luo, Takaura, Hirawake, and Alakarhu are all considered to be analogous to the claimed invention because they are in the same field of adjusting lighting conditions through image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Seyfi (as modified by Luo, Takaura, and Hirawake) to incorporate the teachings of Alakarhu and include “wherein the CPU is further configured to: control a length of an exposure period of the image sensor based on the lighting intensity of the lighting device; turn on the lighting device during the exposure period of the image sensor; and turn off the lighting device outside the exposure period of the image sensor”. The motivation for doing so would have been that “light source 306 (or light emitting apparatus 400) provides functionality to the overall system because it provides the illumination pattern for improving image capture quality”, as suggested by Alakarhu in para. [0046]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Seyfi, Luo, Takaura, and Hirawake with Alakarhu to obtain the invention specified in claim 15.
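Claim 15 couples exposure length to lighting intensity on top of the on/off gating above: the dimmer the light, the longer the sensor exposes. A minimal sketch of one plausible inverse relationship, with illustrative constants only; it is not the application's actual algorithm and nothing in it comes from the cited references:

```python
def exposure_length_s(lighting_intensity: float,
                      target_exposure_product: float = 0.5,
                      min_s: float = 0.001, max_s: float = 0.1) -> float:
    # Control the length of the exposure period based on lighting
    # intensity: hold (intensity x time) roughly constant, clamped to
    # the sensor's supported range. All constants are made up.
    raw = target_exposure_product / max(lighting_intensity, 1e-6)
    return min(max(raw, min_s), max_s)

# Brighter lighting yields a shorter exposure period.
for intensity in (5.0, 20.0, 100.0):
    print(intensity, exposure_length_s(intensity))
```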
Conclusion

THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLA G ALLEN, whose telephone number is (703) 756-5315. The examiner can normally be reached M-F, 7:30am-4:30pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Kyla Guan-Ping Tiao Allen/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661

Prosecution Timeline

Sep 07, 2023
Application Filed
Oct 06, 2025
Non-Final Rejection — §103
Jan 08, 2026
Response Filed
Jan 30, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12597119
OPERATING METHOD OF ELECTRONIC DEVICE INCLUDING PROCESSOR EXECUTING SEMICONDUCTOR LAYOUT SIMULATION MODULE BASED ON MACHINE LEARNING
2y 5m to grant Granted Apr 07, 2026
Patent 12588594
SYSTEM AND METHOD FOR IDENTIFYING LENGTHS OF PARTICLES
2y 5m to grant Granted Mar 31, 2026
Patent 12591963
SYSTEM AND METHOD FOR ENHANCING DEFECT DETECTION IN OPTICAL CHARACTERIZATION SYSTEMS USING A DIGITAL FILTER
2y 5m to grant Granted Mar 31, 2026
Patent 12548152
INTRACRANIAL ARTERY STENOSIS DETECTION METHOD AND SYSTEM
2y 5m to grant Granted Feb 10, 2026
Patent 12541833
ASSESSING IMAGE/VIDEO QUALITY USING AN ONLINE MODEL TO APPROXIMATE SUBJECTIVE QUALITY VALUES
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
89%
Grant Probability
99%
With Interview (+17.1%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 53 resolved cases by this examiner. Grant probability derived from career allow rate.
