Prosecution Insights
Last updated: April 19, 2026
Application No. 17/737,975

OPTICAL NAVIGATION DEVICE WHICH CAN DETECT AND RECORD ABNORMAL REGION

Status: Non-Final OA (§103)
Filed: May 05, 2022
Examiner: PEDERSEN, DAVID RUBEN
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Pixart Imaging Inc.
OA Round: 5 (Non-Final)

Grant Probability: 54% (Moderate)
Projected OA Rounds: 5-6
Projected Time to Grant: 3y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% (55 granted / 101 resolved; +2.5% vs TC avg)
Interview Lift: strong, +52.9% allowance-rate gain on resolved cases with an interview vs. without
Typical Timeline: 3y 2m average prosecution; 34 applications currently pending
Career History: 135 total applications across all art units

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 58.6% (+18.6% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

Tech Center averages are estimates, based on career data from 101 resolved cases.
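The deltas appear to be percentage-point differences from a common baseline. A minimal sketch that backs the implied Tech Center average out of the table above (the dictionary names are illustrative, and the percentage-point reading is an assumption, since the dashboard does not define the delta's units):

```python
# Back out the implied Tech Center baseline from the table above,
# assuming each "vs TC avg" delta is a percentage-point difference
# (an assumption; the dashboard does not define the delta's units).
examiner_rate = {"§101": 15.3, "§103": 58.6, "§102": 10.8, "§112": 12.7}
delta_vs_tc = {"§101": -24.7, "§103": 18.6, "§102": -29.2, "§112": -27.3}

for statute, rate in examiner_rate.items():
    implied_tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {implied_tc_avg:.1f}%")
```

Under that reading, every row implies the same 40.0% baseline, consistent with the chart drawing a single Tech Center estimate; the §103 figure (58.6%) is the only category above that baseline, and §103 is the only statute at issue in this prosecution.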

Office Action

Non-Final Rejection — §103 (excerpt)
DETAILED ACTION

Claims 1-5 and 7-12 are currently pending and have been examined in this application. Claims 6 and 13-18 have been cancelled. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the “request for continued examination” filed 12/07/2025.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “processing circuit” in claim 1 and repeated throughout the claims. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. As such, “processing circuit” will be interpreted as any “hardware (e.g. a device or a circuit) or hardware with software (e.g. a program installed to a processor)” (Spec Para 0014) capable of performing the claimed functions.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3-5, 7-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi Afrouzi (US20220066456) in view of Tiwari (US20190235511).

Claim 1: Ebrahimi Afrouzi explicitly teaches: An automatic sweeper, provided on a surface, comprising: at least one light source, configured to generate light; (Ebrahimi Afrouzi) – “Some embodiments may provide a robot including communication, mobility, actuation, and processing elements.
In some embodiments, the robot may include, but is not limited to include, one or more of a casing, a chassis including a set of wheels, a motor to drive the wheels, a receiver that acquires signals transmitted from, for example, a transmitting beacon, a transmitter for transmitting signals, a processor, a memory storing instructions that when executed by the processor effectuates robotic operations, a controller, a plurality of sensors (e.g., tactile sensor, obstacle sensor, temperature sensor, imaging sensor, light detection and ranging (LIDAR) sensor, camera, depth sensor, time-of-flight (TOF) sensor, TSSP sensor, optical tracking sensor, sonar sensor, ultrasound sensor, laser sensor, light emitting diode (LED) sensor, etc.)… The processor may, for example, receive and process data from internal or external sensors, execute commands based on data received, control motors such as wheel motors, map the environment, localize the robot, determine division of the environment into zones, and determine movement paths.” (Para 0238) “The control system may determine normal kinematic driving, online navigation (i.e., real time navigation), and robust navigation (i.e., navigation in high obstacle density areas).” (Para 0244) “Some aspects include a method for operating a robot, including: capturing, by at least one image sensor disposed on the robot, images of a workspace; obtaining, by a processor of the robot, the captured images; capturing, by a wheel encoder of the robot, movement data indicative of movement of the robot; capturing, by a LIDAR disposed on the robot, LIDAR data as the robot performs work within the workspace, wherein the LIDAR data is indicative of distances from the LIDAR to objects and perimeters immediately surrounding the robot; comparing, by the processor of the robot, at least one object from the captured images to objects in an object dictionary; identifying, by the processor of the robot, a class to which the at least one object belongs; executing, by the robot, a cleaning function and a navigation function, wherein the cleaning function comprises actuating a motor to control at least one of a main brush, a side brush, a fan, and a mop” (Para 0006) a first image sensor, configured to generate first sensing images according to reflected light of the light, wherein a first angle exists between a sensing direction of a sensing surface of the first image sensor and the surface, wherein the first angle is a smallest angle among angles exist between the sensing direction of the first image sensor and the surface; and (Ebrahimi Afrouzi) – “In some embodiments, the robot may use a LIDAR (e.g., 360 degrees LIDAR) to measure distances to objects along a two dimensional plane. For example, a robot may use a LIDAR to measure distances to objects within an environment along a 360 degrees plane. In some embodiments, the robot may use a two-and-a-half dimensional LIDAR. For example, the two-and-a-half dimensional LIDAR may measure distances along multiple planes at different heights corresponding with the total height of illumination provided by the LIDAR.” (Para 0585) “In some embodiments, at least two cameras and a structured light source may be used in reconstructing objects in three dimensions. The light source may emit a structured light pattern onto objects within the environment and the cameras may capture images of the light patterns projected onto objects. 
In embodiments, the light pattern in images captured by each camera may be different and the processor may use the difference in the light patterns to construct objects in three dimensions.” (Para 0506) “In embodiments, the camera of the robot (e.g., depth camera or other camera) may be positioned in any area of the robot and in various orientations. For example, sensors may be positioned on a back, a front, a side, a bottom, and/or a top of the robot. Also, sensors may be oriented upwards, downwards, sideways, and/or in any specified angle.” (Para 0591) “Some embodiments may include a light source, such as laser, positioned at an angle with respect to a horizontal plane and a camera. The light source may emit a light onto surfaces of objects within the environment and the camera may capture images of the light source projected onto the surfaces of objects.” (Para 0599) “In some embodiments, the robot comprises two lasers with different or same shape positioned at different angles. For example, the robot may include a camera, a first laser and a second laser, each laser positioned at a different angle… In some embodiments, the processor determines a distance of the object on which the laser lines are projected based on a position of the laser lines relative to an edge of the image. In embodiments, the wavelength of light emitted from one or more lasers may be the same or different…a similar result may be captured using two cameras positioned at two different angles and a single laser.” (Para 0603) Examiner Note: A sensor with a specific orientation will have an angle equivalent to the “first angle” which simply “exists” and is not actively created. a second image sensor, configured to generate second sensing images according to the reflected light, wherein a second angle exists between a sensing direction of a sensing surface of the second image sensor and the surface, wherein the second angle is a smallest angle among angles exist between the sensing direction of the second image sensor and the surface, (Ebrahimi Afrouzi) – “In some embodiments, the robot may use a LIDAR (e.g., 360 degrees LIDAR) to measure distances to objects along a two dimensional plane. For example, a robot may use a LIDAR to measure distances to objects within an environment along a 360 degrees plane. In some embodiments, the robot may use a two-and-a-half dimensional LIDAR. For example, the two-and-a-half dimensional LIDAR may measure distances along multiple planes at different heights corresponding with the total height of illumination provided by the LIDAR.” (Para 0585) “In some embodiments, at least two cameras and a structured light source may be used in reconstructing objects in three dimensions. The light source may emit a structured light pattern onto objects within the environment and the cameras may capture images of the light patterns projected onto objects. In embodiments, the light pattern in images captured by each camera may be different and the processor may use the difference in the light patterns to construct objects in three dimensions.” (Para 0506) “In embodiments, the camera of the robot (e.g., depth camera or other camera) may be positioned in any area of the robot and in various orientations. For example, sensors may be positioned on a back, a front, a side, a bottom, and/or a top of the robot. 
Also, sensors may be oriented upwards, downwards, sideways, and/or in any specified angle.” (Para 0591) “Some embodiments may include a light source, such as laser, positioned at an angle with respect to a horizontal plane and a camera. The light source may emit a light onto surfaces of objects within the environment and the camera may capture images of the light source projected onto the surfaces of objects.” (Para 0599) “In some embodiments, the robot comprises two lasers with different or same shape positioned at different angles. For example, the robot may include a camera, a first laser and a second laser, each laser positioned at a different angle… In some embodiments, the processor determines a distance of the object on which the laser lines are projected based on a position of the laser lines relative to an edge of the image. In embodiments, the wavelength of light emitted from one or more lasers may be the same or different…a similar result may be captured using two cameras positioned at two different angles and a single laser.” (Para 0603) Examiner Note: A sensor with a specific orientation will have an angle equivalent to the “second angle” which simply “exists” and is not actively created. wherein the first angle is larger than the second angle; (Ebrahimi Afrouzi) – “a similar result may be captured using two cameras positioned at two different angles and a single laser.” (Para 0603) “In embodiments, the camera of the robot (e.g., depth camera or other camera) may be positioned in any area of the robot and in various orientations. For example, sensors may be positioned on a back, a front, a side, a bottom, and/or a top of the robot. Also, sensors may be oriented upwards, downwards, sideways, and/or in any specified angle.” (Para 0591) “In some embodiments, at least two cameras and a structured light source may be used in reconstructing objects in three dimensions. The light source may emit a structured light pattern onto objects within the environment and the cameras may capture images of the light patterns projected onto objects. In embodiments, the light pattern in images captured by each camera may be different and the processor may use the difference in the light patterns to construct objects in three dimensions.” (Para 0506) Examiner Note: Given that the angles of the cameras are different, one will be larger than the other. a processing circuit, (Ebrahimi Afrouzi) – “FIG. 181C illustrates a map 18102 displayed to a user, including problematic area 18103, robot 18100, and notification 18104 that the user may use to choose for the robot 18100 to avoid area 18103 next time or edit the area 18103.” (Para 1133) “In some embodiments, the robot includes a touch-sensitive display or otherwise a touch screen. In some embodiments, the touch screen may include a separate MCU or CPU for the user interface may share the main MCU or CPU of the robot. In some embodiments, the touch screen may include an ARM Cortex M0 processor with one or more computer-readable storage mediums, a memory controller, one or more processing units, a peripherals interface, Radio Frequency (RF) circuitry, audio circuitry, a speaker, a microphone, an Input/Output (I/O) subsystem, other input control devices, and one or more external ports.” (Para 0735) “Some embodiments may provide a robot including communication, mobility, actuation, and processing elements. 
In some embodiments, the robot may include, but is not limited to include, one or more of a casing, a chassis including a set of wheels, a motor to drive the wheels, a receiver that acquires signals transmitted from, for example, a transmitting beacon, a transmitter for transmitting signals, a processor, a memory storing instructions that when executed by the processor effectuates robotic operations, a controller, a plurality of sensors (e.g., tactile sensor, obstacle sensor, temperature sensor, imaging sensor, light detection and ranging (LIDAR) sensor, camera, depth sensor, time-of-flight (TOF) sensor, TSSP sensor, optical tracking sensor, sonar sensor, ultrasound sensor, laser sensor, light emitting diode (LED) sensor, etc.)… The processor may, for example, receive and process data from internal or external sensors, execute commands based on data received, control motors such as wheel motors, map the environment, localize the robot, determine division of the environment into zones, and determine movement paths.” (Para 0238) Examiner Note: Per BRI, abnormal region may correspond with any surface, obstacle, or object which is different from the surrounding area. Ebrahimi Afrouzi does not explicitly teach: configured to determine a protruding level of an object on [[a]]the surface according to a light region of the first sensing images or the second sensing images, and to determine the abnormal region according to the protruding level, wherein the light region is formed by the reflected light; wherein the processing circuit determines the protruding level according to the second sensing images, and then determines the abnormal region according to the first sensing images if the protruding level is larger than a protruding threshold; wherein the automatic sweeper stops and generates a warning message and waits for a user to process, if the processing circuit determines that the abnormal region exists. However, Tiwari, in the same field of endeavor of abnormal region detection, teaches: configured to determine a protruding level of an object on [[a]]the surface according to a light region of the first sensing images or the second sensing images, and to determine the abnormal region according to the protruding level, wherein the light region is formed by the reflected light; (Tiwari) – “In particular, after isolating the region of the depth map that corresponds to the region of interest in the thermal image, the robotic system can scan the region of the depth map for a height gradient, which may indicate presence of a solid object on the area of the floor adjacent the robotic system.
In one implementation, the robotic system can execute edge detection, blob detection, and/or other computer vision techniques to delineate a height gradient(s) in the region of the depth map.…In response to detecting height gradients and/or surfaces within this area of interest within the depth map, the robotic system can identify presence of a solid object (e.g., a box or a grape) offset above the ground floor.” (Para 0062) “Block S110 of the method S100 recites recording a thermal image of an area of a floor of the store; and Block S112 of the method S100 recites detecting a thermal gradient in the thermal image…The robotic system (or the remote computer system) can then interpret such thermal gradients as either solid objects (boxes, cans, bottles, grapes, apples) or amorphous objects (e.g., fluids, liquids) based on additional color and/or height data collected through the color camera(s) and depth sensor(s) integrated into the robotic system, as described above.” (Para 0046) “Throughout operation, the robotic system can regularly record a depth map (e.g., LIDAR) by sampling the depth sensor, such as at a rate of 10 Hz. Based on the depth map, the robotic system can determine its location (i.e., “localize itself”) within the store, such as described above. The robotic system can then: access a known location and orientation of the thermographic camera and a location and orientation of the depth sensor; access a lookup table and/or a parameterized model for projecting pixels of the depth map onto the thermal image; link or map each pixel within the region of interest of the thermal image to a corresponding pixel in the depth map according to the parameterized model and/or lookup table; and identify a region of the depth map that corresponds to the region of interest in the thermal image. The robotic system can also: project a floor plan of the store onto the depth map to isolate a segment of the depth map representing the floor of the store and excluding a fixed display near the area of the floor in the store; project a ground plane onto the segment of the depth map; and then scan the segment of the depth map for an object offset above the ground plane.” (Para 0061) Examiner Note: Depth map is generated using lidar which, by nature, utilizes reflected light. Therefore, per BRI, depth map corresponds with light region. wherein the processing circuit determines the protruding level according to the second sensing images, and (Tiwari) – “In particular, after isolating the region of the depth map that corresponds to the region of interest in the thermal image, the robotic system can scan the region of the depth map for a height gradient, which may indicate presence of a solid object on the area of the floor adjacent the robotic system. 
In one implementation, the robotic system can execute edge detection, blob detection, and/or other computer vision techniques to delineate a height gradient(s) in the region of the depth map.…In response to detecting height gradients and/or surfaces within this area of interest within the depth map, the robotic system can identify presence of a solid object (e.g., a box or a grape) offset above the ground floor.” (Para 0062) “Block S110 of the method S100 recites recording a thermal image of an area of a floor of the store; and Block S112 of the method S100 recites detecting a thermal gradient in the thermal image…The robotic system (or the remote computer system) can then interpret such thermal gradients as either solid objects (boxes, cans, bottles, grapes, apples) or amorphous objects (e.g., fluids, liquids) based on additional color and/or height data collected through the color camera(s) and depth sensor(s) integrated into the robotic system, as described above.” (Para 0046) then determines the abnormal region according to the first sensing images if the protruding level is larger than a protruding threshold; (Tiwari) – “In particular, after isolating the region of the depth map that corresponds to the region of interest in the thermal image, the robotic system can scan the region of the depth map for a height gradient, which may indicate presence of a solid object on the area of the floor adjacent the robotic system. In one implementation, the robotic system can execute edge detection, blob detection, and/or other computer vision techniques to delineate a height gradient(s) in the region of the depth map.…In response to detecting height gradients and/or surfaces within this area of interest within the depth map, the robotic system can identify presence of a solid object (e.g., a box or a grape) offset above the ground floor.” (Para 0062) “Block S110 of the method S100 recites recording a thermal image of an area of a floor of the store; and Block S112 of the method S100 recites detecting a thermal gradient in the thermal image…The robotic system (or the remote computer system) can then interpret such thermal gradients as either solid objects (boxes, cans, bottles, grapes, apples) or amorphous objects (e.g., fluids, liquids) based on additional color and/or height data collected through the color camera(s) and depth sensor(s) integrated into the robotic system, as described above.” (Para 0046) “For example, during operation, the robotic system can: autonomously navigate toward an area of the floor of the store in Block S102; record a thermal image of the area of the floor of the store in Block S110; record a depth map of the area of the floor in Block S120; detect a thermal gradient in the thermal image in Block S112; and scan a region of the depth map, corresponding to the thermal gradient detected in the thermal image, for a height gradient greater than a minimum height threshold in Block S122. 
In response to detecting a thermal gradient in the thermal image and in response to detecting a height gradient—greater than the minimum height threshold (e.g., one centimeter) and less than a maximum height threshold (e.g., fifteen centimeters)—in the corresponding region of the depth map, the robotic system can: predict presence of a small hazardous object (e.g., a grape, a can, a banana) within the floor area in Block S150; and then serve a prompt to remove the hazardous object from the floor area to a store associate in Block S160.” (Para 0075) wherein the automatic sweeper stops and generates a warning message and waits for a user to process, if the processing circuit determines that the abnormal region exists. (Tiwari) – “The robotic system (or the remote computer system) can then immediately notify a store associate of the location and characteristics of the spill (e.g., spill size, predicted spill material, suggested cleanup material, cleanup urgency, etc.), such as by sending a notification containing these data to a mobile computing device assigned to the store associate. Concurrently, the robotic system can halt near the detected spill (e.g., to function as a caution cone) and warn nearby patrons of the spill, such as by rendering a warning on an integrated display, activating an integrated strobe light, and/or outputting an audible alarm through an integrated speaker.” (Para 0014) “In one implementation, the robotic system can identify a location adjacent a boundary of the spill and hold at this location in order to block physical access to the spill and thus encourage patrons to pass around the spill rather than traversing the spill. For example, the robotic system can: detect a perimeter of the spilled fluid in the area of the floor nearby, such as by extracting a perimeter of the thermal gradient depicting the spill from a thermal image of this floor area; autonomously navigate toward the perimeter of the fluid (but avoid crossing this perimeter into the spilled fluid); hold in the position proximal the perimeter of the spilled fluid in order to physically block access to this floor area; and output an indicator of presence of the spill, such as described below. The robotic system can then remain in this position near the spill until the robotic system (or the remote computer system): receives confirmation from a store associate that the spill has been cleared; receives confirmation from the store associate that an alternative warning infrastructure (e.g., a “Wet Floor” sign) has been placed near the spill; receives a prompt from the store associate to move away from the spill to enable manual cleanup; directly detects removal of the spill (e.g., based on absence of a thermal gradient in the corresponding location in a later thermal image); or directly detects placement of an alternative warning infrastructure nearby. Alternatively, the robotic system can autonomously navigate back and forth across an aisle in which the spill was detected in order to block access to the aisle.” (Para 0096) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the obstacle recognition system of Ebrahimi Afrouzi with the method for detecting and responding to hazards of Tiwari.
One of ordinary skill in the art would have been motivated to make these modifications with a reasonable expectation of success in order to “identify and prompt any other user to clear any other obstacle on the floor of the store or otherwise obstructing (or limiting) passage” (Tiwari Para 0018).

Claim 3: Ebrahimi Afrouzi in combination with the references relied upon in Claim 1 teach those respective limitations. Ebrahimi Afrouzi further teaches: wherein the processing circuit determines the abnormal region according to image features of the first sensing images or the second sensing images. (Ebrahimi Afrouzi) – “the robot may include sensors to detect or sense objects, acceleration, angular and linear movement, temperature, humidity, water, pollution, particles in the air, supplied power, proximity, external motion, device motion, sound signals, ultrasound signals, light signals, fire, smoke, carbon monoxide, global-positioning-satellite (GPS) signals, radio-frequency (RF) signals, other electromagnetic signals or fields, visual features, textures, optical character recognition (OCR) signals, spectrum meters, and the like.” (Para 0239) “In embodiments, a kernel may consist of multiple layers of feature maps, each designed to detect a different feature. All neurons in a single feature map share the same parameters and allow the network to recognize a feature pattern regardless of where the feature pattern is within the input. This is important for object detection.” (Para 0274) “In some embodiments, the processor identifies the object based on the characteristics and features of the object. Characteristics of the object, for example, may include shape, color, size” (Para 0382) Examiner Note: Image features are recited with a high degree of generality. Per BRI, image features may correspond with any feature or data related to the image.

Claim 4: Ebrahimi Afrouzi in combination with the references relied upon in Claim 3 teach those respective limitations. Ebrahimi Afrouzi further teaches: wherein the processing circuit determines a plurality of candidate abnormal regions according to the image features, (Ebrahimi Afrouzi) – “In some embodiments, the processor identifies the object based on the characteristics and features of the object. Characteristics of the object, for example, may include shape, color, size” (Para 0382) “the processor selects features to be detected from a group of candidates. Each feature type may comprise multiple candidates of that type. Feature types may include, for example, a corner, a blob, an arc, a circle, an edge, a line, etc. Each feature type may have a best candidate and multiple runner up candidates. Selections of features to be detected from a group of candidates may be determined based on any of pixel intensities, pixel intensity derivative magnitude, and direction of pixel intensity gradients of groups of pixels, and inter-relations among a group of pixels with other groups of pixels.” (Para 0462) Ebrahimi Afrouzi does not explicitly teach: determines the candidate abnormal region which has repeated image features as a normal region, and determines the candidate abnormal region which does not have the repeated image features as the abnormal region. However, Tiwari, in the same field of endeavor of abnormal region detection, teaches: determines the candidate abnormal region which has repeated image features as a normal region, and determines the candidate abnormal region which does not have the repeated image features as the abnormal region.
(Tiwari) – “Block S110 of the method S100 recites recording a thermal image of an area of a floor of the store; and Block S112 of the method S100 recites detecting a thermal gradient in the thermal image. Generally, in Block S110, the robotic system records thermal images via the thermographic camera (or infrared or other thermal sensor) integrated into the robotic system throughout operation (e.g., during an inventory tracking routine or spill detection routine). In Block S112, the robotic system (or the remote computer system) processes these thermal images to identify thermal gradients (i.e., temperature disparities, temperature discontinuities) in regions of these thermal images that intersect a floor surface or ground plane depicted in these thermal images, such as shown in FIG. 2. The robotic system (or the remote computer system) can then interpret such thermal gradients as either solid objects (boxes, cans, bottles, grapes, apples) or amorphous objects (e.g., fluids, liquids) based on additional color and/or height data collected through the color camera(s) and depth sensor(s) integrated into the robotic system, as described above.” (Para 0046) “the robotic system regularly records thermal images, such as at a rate of 2 Hz, via a forward-facing thermographic camera throughout operation in Block S110. Concurrently, the robotic system can record a depth map through the depth sensor (e.g., at a rate of 10 Hz) and/or record a color image through the color camera (e.g., at a rate of 20 Hz). As the robotic system records thermal images, the robotic system can locally scan a thermal image for a thermal gradient, temperature disparity, reflectivity disparity, or emissivity disparity, etc.—proximal a ground plane (i.e., the floor) depicted in the thermal image—which may indicate presence of two different materials in the field of view of the thermographic camera (e.g., a floor material and a liquid). Generally, when a substance (e.g., oil, water, soda, tomato sauce, or other liquid; grains, grapes, pasta, or another non-liquid substance) has spilled onto a surface of the floor of the store, this substance may present as a temperature disparity (or temperature discontinuity, thermal gradient)—relative to a background floor material—in a thermal image of this surface of the floor” (Para 0047) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the obstacle recognition system of Ebrahimi Afrouzi with the method for detecting and responding to hazards of Tiwari. One of ordinary skill in the art would have been motivated to make these modifications with a reasonable expectation of success in order to “identify and prompt any other user to clear any other obstacle on the floor of the store or otherwise obstructing (or limiting) passage” (Tiwari Para 0018).

Claim 5: Ebrahimi Afrouzi in combination with the references relied upon in Claim 4 teach those respective limitations. Ebrahimi Afrouzi further teaches: wherein the image features comprises colors. (Ebrahimi Afrouzi) – “In some embodiments, the processor identifies the object based on the characteristics and features of the object. Characteristics of the object, for example, may include shape, color, size” (Para 0382)

Claim 6: Cancelled.

Claim 7: Ebrahimi Afrouzi in combination with the references relied upon in Claim 1 teach those respective limitations.
Ebrahimi Afrouzi further teaches: wherein the processing circuit further determines a type of the abnormal region according to a difference level and a difference distribution of light intensities of reflected light received by the first image sensor or the second image sensor. (Ebrahimi Afrouzi) – “In yet another example, a RGB camera is set up with a structured light such that it is time multiplexed and synched…In a first time slot, the processor of the robot detects a set of corners 1, 2 and 3 and TV 14800 as features based on sensor data. In a next time slot, the area is illuminated and the processor of the robot extracts L2 norm distances to a plane. With more sophistication, this may be performed with 3D data. In addition to the use of structured light in extracting distance, the structured light may provide an enhanced clear indication of corners. For instance, a grid like structured light projected onto a wall with corners is distorted at the corners. The distorted structured light extracted from the RGB image based on examining a change of intensity and filters correlates with corners.” (Para 0441) “In some embodiments, a set of objects are included in a dictionary of objects of interest… Within the green channel, higher intensities are observed for objects perceived to be green in color. For example, a group of high intensity pixels surrounded by pixels of low intensity pixels in the green channel may be detected in an image as an object with green color. Some embodiments may adjust a certain intensity requirement of pixels, a certain intensity requirement of pixels when surrounded by pixels of a certain intensity, relative intensity of the pixels in relation to the surrounding pixels, etc.” (Para 0452) “In some embodiments, the processor selects features to be detected from a group of candidates. Each feature type may comprise multiple candidates of that type. Feature types may include, for example, a corner, a blob, an arc, a circle, an edge, a line, etc. Each feature type may have a best candidate and multiple runner up candidates. Selections of features to be detected from a group of candidates may be determined based on any of pixel intensities, pixel intensity derivative magnitude, and direction of pixel intensity gradients of groups of pixels, and inter-relations among a group of pixels with other groups of pixels.” (Para 0462)

Claim 8: Ebrahimi Afrouzi in combination with the references relied upon in Claim 7 teach those respective limitations. Ebrahimi Afrouzi further teaches: wherein the processing circuit further determines the abnormal region is an unexpected object if the light intensities non-continuously vary and the difference level is larger than a difference threshold. (Ebrahimi Afrouzi) – “In some embodiments, a set of objects are included in a dictionary of objects of interest… Within the green channel, higher intensities are observed for objects perceived to be green in color. For example, a group of high intensity pixels surrounded by pixels of low intensity pixels in the green channel may be detected in an image as an object with green color. Some embodiments may adjust a certain intensity requirement of pixels, a certain intensity requirement of pixels when surrounded by pixels of a certain intensity, relative intensity of the pixels in relation to the surrounding pixels, etc.” (Para 0452) “In some embodiments, the processor selects features to be detected from a group of candidates. Each feature type may comprise multiple candidates of that type.
Feature types may include, for example, a corner, a blob, an arc, a circle, an edge, a line, etc. Each feature type may have a best candidate and multiple runner up candidates. Selections of features to be detected from a group of candidates may be determined based on any of pixel intensities, pixel intensity derivative magnitude, and direction of pixel intensity gradients of groups of pixels, and inter-relations among a group of pixels with other groups of pixels.” (Para 0462)

Claim 9: Ebrahimi Afrouzi in combination with the references relied upon in Claim 7 teach those respective limitations. Ebrahimi Afrouzi further teaches: wherein the processing circuit further determines the abnormal region is deformation of the surface if the light intensities continuously vary and the difference level is smaller than a difference threshold. (Ebrahimi Afrouzi) – “In some embodiments, a set of objects are included in a dictionary of objects of interest… Within the green channel, higher intensities are observed for objects perceived to be green in color. For example, a group of high intensity pixels surrounded by pixels of low intensity pixels in the green channel may be detected in an image as an object with green color. Some embodiments may adjust a certain intensity requirement of pixels, a certain intensity requirement of pixels when surrounded by pixels of a certain intensity, relative intensity of the pixels in relation to the surrounding pixels, etc.” (Para 0452) “In some embodiments, the processor selects features to be detected from a group of candidates. Each feature type may comprise multiple candidates of that type. Feature types may include, for example, a corner, a blob, an arc, a circle, an edge, a line, etc. Each feature type may have a best candidate and multiple runner up candidates. Selections of features to be detected from a group of candidates may be determined based on any of pixel intensities, pixel intensity derivative magnitude, and direction of pixel intensity gradients of groups of pixels, and inter-relations among a group of pixels with other groups of pixels.” (Para 0462) “In some embodiments, data from multiple classes of sensors may be used in determining traversability of an area… In some embodiments, the processor may extract a driving surface plane from an image without illumination. In some embodiments, the driving surface plane may be highly weighted in the determination of the traversability of an area. In some embodiments, a flat driving surface may appear as a uniform color in captured images. In some embodiments, obstacles, cliffs, holes, walls, etc. may appear as different textures in captured images. In some embodiments, the processor may distinguish the driving surface from other objects, such as walls, ceilings, and other flat and smooth surfaces, given the expected angle of the driving surface with respect to the camera.” (Para 1151) “A second sensor may measure a small negative height for the same area that may increase the probability of traversability of the area and the area may be marked as traversable. However, another sensor reading indicating a high negative height at the same area decreases the probability of traversability of the area.
When a probability of traversability of an area reaches below a threshold the area may be marked as a high risk coverage area.” (Para 1152) “In some embodiments, derivative of pixel values of two neighboring pixels of the image (e.g., the change in pixel value between two neighboring pixels) may correspond to traversability from one cell to the neighboring cell. For example, a hard floor of a basement of a building may have a value of zero for height, a carpet of the basement may have a value of one for height, a ceiling of the basement may have a value of 18 for height, and a ground floor of the building may have a value of 20 for height. The transition from the hard floor with a height of zero and the carpet with a height of one may be deemed a traversable path.” (Para 1332)

Claim 10: Ebrahimi Afrouzi in combination with the references relied upon in Claim 1 teach those respective limitations. Ebrahimi Afrouzi does not explicitly teach the following limitations in full. Tiwari further teaches: wherein the processing circuit further determines if image features of the first sensing images or the second sensing images are continuous when the protruding level is lower than the protruding threshold; (Tiwari) – “in response to detecting a thermal gradient in the thermal image and detecting a cospatial height gradient in a depth map (e.g., presence of a surface offset above a ground plane—projected onto or defined in the depth map—by more than the minimum height threshold), the robotic system can predict presence of a solid object in the floor area.” (Para 0074) “the robotic system can then identify presence of clear fluid in the floor area in response to: detecting the thermal gradient in the thermal image; detecting absence of the height gradient (e.g., absence of a surface offset above a ground plane—projected onto or defined in the depth map—by more than a minimum height threshold of one centimeter) in the cospatial region of the depth map; and detecting absence of a color gradient in the cospatial region of the depth map.” (Para 0071) “In particular, after isolating the region of the depth map that corresponds to the region of interest in the thermal image, the robotic system can scan the region of the depth map for a height gradient, which may indicate presence of a solid object on the area of the floor adjacent the robotic system.
In one implementation, the robotic system can execute edge detection, blob detection, and/or other computer vision techniques to delineate a height gradient(s) in the region of the depth map.…In response to detecting height gradients and/or surfaces within this area of interest within the depth map, the robotic system can identify presence of a solid object (e.g., a box or a grape) offset above the ground floor.” (Para 0062) “Block S110 of the method S100 recites recording a thermal image of an area of a floor of the store; and Block S112 of the method S100 recites detecting a thermal gradient in the thermal image…The robotic system (or the remote computer system) can then interpret such thermal gradients as either solid objects (boxes, cans, bottles, grapes, apples) or amorphous objects (e.g., fluids, liquids) based on additional color and/or height data collected through the color camera(s) and depth sensor(s) integrated into the robotic system, as described above.” (Para 0046) wherein the processing circuit further determines colors of the first sensing images or the second sensing images if the image features are non-continuous and determines the abnormal region according to the colors. (Tiwari) – “In this example, the robotic system (or the remote computer system) can also scan a color image recorded at approximately the first time by the robotic system for a color gradient (or color disparity) in a region cospatial with the thermal gradient. If the robotic system detects such a color gradient, the robotic system can identify the spill as visually discernible (e.g., brown soda, red tomato sauce). However, if the robotic system detects lack of such a color gradient, the robotic system can identify the spill as not visually discernible or “clear” (e.g., water, oil).” (Para 0013) “Block S132 of the method S100 recites scanning a region of the color image, corresponding to the thermal gradient in the thermal image, for a color gradient. Generally, the robotic system records color images or other photographic data of a field near (e.g., ahead of) the robotic system via an integrated color camera in Block S130 and then processes the color camera to detect an object cospatial with a thermal disparity and/or a height disparity detected in concurrent thermal and depth images in Block S132.” (Para 0065) “in response to detecting presence of a static thermal gradient, presence of a static cospatial height gradient greater than the maximum height threshold, and presence of a static cospatial color gradient over a sequence of thermal, depth, and color images of a floor area, the robotic system can identify a box, pallet, or temporary display in this floor area.” (Para 0077) “In particular, after isolating the region of the depth map that corresponds to the region of interest in the thermal image, the robotic system can scan the region of the depth map for a height gradient, which may indicate presence of a solid object on the area of the floor adjacent the robotic system.
In one implementation, the robotic system can execute edge detection, blob detection, and/or other computer vision techniques to delineate a height gradient(s) in the region of the depth map.…In response to detecting height gradients and/or surfaces within this area of interest within the depth map, the robotic system can identify presence of a solid object (e.g., a box or a grape) offset above the ground floor.” (Para 0062) “Block S110 of the method S100 recites recording a thermal image of an area of a floor of the store; and Block S112 of the method S100 recites detecting a thermal gradient in the thermal image…The robotic system (or the remote computer system) can then interpret such thermal gradients as either solid objects (boxes, cans, bottles, grapes, apples) or amorphous objects (e.g., fluids, liquids) based on additional color and/or height data collected through the color camera(s) and depth sensor(s) integrated into the robotic system, as described above.” (Para 0046) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the obstacle recognition system of Ebrahimi Afrouzi with the method for detecting and responding to hazards of Tiwari. One of ordinary skill in the art would have been motivated to make these modifications with a reasonable expectation of success in order to “identify and prompt any other user to clear any other obstacle on the floor of the store or otherwise obstructing (or limiting) passage” (Tiwari Para 0018).

Claim 11: Ebrahimi Afrouzi in combination with the references relied upon in Claim 10 teach those respective limitations. Ebrahimi Afrouzi further teaches: (Ebrahimi Afrouzi) – “In embodiments, the camera of the robot (e.g., depth camera or other camera) may be positioned in any area of the robot and in various orientations. For example, sensors may be positioned on a back, a front, a side, a bottom, and/or a top of the robot. Also, sensors may be oriented upwards, downwards, sideways, and/or in any specified angle.” (Para 0591) “Some embodiments may include a light source, such as laser, positioned at an angle with respect to a horizontal plane and a camera. The light source may emit a light onto surfaces of objects within the environment and the camera may capture images of the light source projected onto the surfaces of objects.” (Para 0599) […]
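Taken together, the Tiwari passages the rejection relies on describe a three-signal decision: a thermal gradient flags a candidate floor region, a cospatial height gradient in the depth map separates solid objects from amorphous spills, and a color gradient separates visible spills from clear fluids. A minimal sketch of that branching, using the example thresholds Tiwari gives (1 cm and 15 cm); the function and variable names are illustrative assumptions, not code from the reference:

```python
# Sketch of the hazard classification described in the quoted Tiwari
# passages (Paras 0013, 0046, 0062, 0071, 0074-0077). Names and the
# exact branching granularity are illustrative assumptions.
MIN_HEIGHT_CM = 1.0   # "minimum height threshold (e.g., one centimeter)"
MAX_HEIGHT_CM = 15.0  # "maximum height threshold (e.g., fifteen centimeters)"

def classify_floor_region(thermal_gradient: bool,
                          height_gradient_cm: float,
                          color_gradient: bool) -> str:
    """Classify a floor region from cospatial thermal, depth, and color cues."""
    if not thermal_gradient:
        return "no anomaly"
    if height_gradient_cm > MAX_HEIGHT_CM:
        # Tall static gradients read as fixtures (Para 0077).
        return "box, pallet, or temporary display"
    if height_gradient_cm >= MIN_HEIGHT_CM:
        # Small solid object: prompt an associate to remove it (Para 0075).
        return "small hazardous object (e.g., a grape, a can)"
    if color_gradient:
        # Thermal + color but no height: visually discernible spill (Para 0013).
        return "visible spill (e.g., soda, tomato sauce)"
    # Thermal gradient only: clear fluid; robot halts nearby and warns
    # patrons (Paras 0014, 0071).
    return "clear fluid (e.g., water, oil)"

print(classify_floor_region(True, 0.0, False))  # -> clear fluid (e.g., water, oil)
```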
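For contrast, the control flow actually recited in the claims (claims 1 and 10, as quoted in the rejection) can be paraphrased as follows. This is only a reading of the claim language for orientation; every name is hypothetical, and the claims recite no particular implementation:

```python
# Paraphrase of the decision flow recited in claims 1 and 10. All names
# are hypothetical; the boolean inputs stand in for determinations the
# claims assign to the processing circuit.
def sweeper_step(protruding_level: float,
                 protruding_threshold: float,
                 first_images_show_abnormal: bool,
                 features_continuous: bool,
                 colors_indicate_abnormal: bool) -> str:
    if protruding_level > protruding_threshold:
        # Claim 1: the protruding level (determined from the second sensing
        # images) exceeds the threshold, so the abnormal region is determined
        # according to the first sensing images.
        abnormal = first_images_show_abnormal
    elif not features_continuous:
        # Claim 10: below the threshold, non-continuous image features are
        # resolved by examining colors of the sensing images.
        abnormal = colors_indicate_abnormal
    else:
        abnormal = False
    # Claim 1: on an abnormal region, the sweeper stops, generates a warning
    # message, and waits for a user to process.
    return "stop, warn, wait for user" if abnormal else "continue sweeping"

print(sweeper_step(2.5, 1.0, True, True, False))  # -> stop, warn, wait for user
```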

Prosecution Timeline

May 05, 2022 — Application Filed
Jun 28, 2024 — Non-Final Rejection — §103
Oct 07, 2024 — Response Filed
Dec 10, 2024 — Final Rejection — §103
Mar 14, 2025 — Request for Continued Examination
Mar 14, 2025 — Response after Non-Final Action
Mar 19, 2025 — Non-Final Rejection — §103
Jun 20, 2025 — Response Filed
Sep 08, 2025 — Final Rejection — §103
Dec 07, 2025 — Request for Continued Examination
Dec 09, 2025 — Response after Non-Final Action
Dec 09, 2025 — Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597267
METHOD AND SYSTEM FOR MULTI-OBJECT TRACKING AND NAVIGATION WITHOUT PRE-SEQUENCING
2y 5m to grant · Granted Apr 07, 2026
Patent 12589756
ASYMMETRIC FAILSAFE SYSTEM ARCHITECTURE
2y 5m to grant · Granted Mar 31, 2026
Patent 12590813
NAVIGATION INTERFACE DISPLAY METHOD AND APPARATUS, TERMINAL, AND STORAGE MEDIUM
2y 5m to grant · Granted Mar 31, 2026
Patent 12578204
Method and Apparatus for Automatically Marking U-Turn Lane Line, Computer-Readable Storage Medium, and Map
2y 5m to grant · Granted Mar 17, 2026
Patent 12570321
AUTONOMOUS DRIVING DEVICE, METHOD AND SYSTEM
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 54%
With Interview: 99% (+52.9%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 101 resolved cases by this examiner. Grant probability derived from career allow rate.
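The headline figures appear to be simple arithmetic on the career counts above. A minimal sketch (the with-interview line assumes the +52.9% lift is additive percentage points over the no-interview rate, which is an assumption about how the dashboard defines it):

```python
# Reproduce the headline projections from the career counts shown above.
granted, resolved = 55, 101
allow_rate = 100 * granted / resolved      # 54.46% -> displayed as "54%"

with_interview = 99.0                      # displayed figure
interview_lift = 52.9                      # assumed additive, in points
implied_without_interview = with_interview - interview_lift  # 46.1%

print(f"career allow rate: {allow_rate:.1f}%")
print(f"implied allow rate without interview: {implied_without_interview:.1f}%")
```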
