Prosecution Insights
Last updated: April 19, 2026
Application No. 18/230,685

METHOD AND SYSTEM FOR DETECTING FLOOR STAINS USING SURROUND VIEW IMAGES

Final Rejection — §101, §103, §112
Filed: Aug 07, 2023
Examiner: RIVERA, CARLOS A
Art Unit: 3723
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: L&T Technology Services Limited
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (386 granted / 501 resolved; +7.0% vs TC avg; above average)
Interview Lift: +29.2% for resolved cases with an interview (strong)
Typical Timeline: 3y 7m average prosecution; 38 applications currently pending
Career History: 539 total applications across all art units
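These headline figures are mutually consistent under one plausible reading, in which the with-interview probability applies the interview lift multiplicatively to the career allow rate. This is our assumption, not the tool's documented methodology:

```python
# Assumed arithmetic behind the displayed stats; not the tool's documented method.
granted, resolved = 386, 501
allow_rate = granted / resolved                       # 0.7705 -> displayed "77%"
interview_lift = 0.292                                # "+29.2% Interview Lift"
with_interview = allow_rate * (1 + interview_lift)    # 0.9955 -> displayed "99%"
print(int(allow_rate * 100), int(with_interview * 100))  # 77 99 (truncated)
```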

Statute-Specific Performance

§101: 0.6% (-39.4% vs TC avg)
§103: 42.9% (+2.9% vs TC avg)
§102: 25.5% (-14.5% vs TC avg)
§112: 25.7% (-14.3% vs TC avg)
Tech Center averages are estimates; based on career data from 501 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Response to Arguments

Drawings

Applicant did not address the Drawings objections in the previous Office action. While there is a page titled "Amendment to the Drawings," there are no actual replacement drawings in the document.

Claim Interpretation under 35 USC 112(f)

Applicant argues on pages 22-24 of the Remarks that the "floor cleaning device" is structurally supported in the specification and should not be construed under 35 USC 112(f). The Examiner respectfully disagrees. MPEP 2181 I states:

[E]xaminers will apply 35 U.S.C. 112(f) to a claim limitation if it meets the following 3-prong analysis:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; in this case, the claim uses the generic term "device";

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; in this case, the term "device" is modified by functional language, i.e., "floor cleaning"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function; in this case, the term "device" is not modified by sufficient structure, materials, or acts for performing the claimed function.

The fact that the specification discloses sufficient structure only precludes the Examiner from rejecting the claim under 35 USC 112(a) and 35 USC 112(b) (see MPEP 2181 II, "Description Necessary to Support a Claim Limitation Which Invokes 35 U.S.C. 112(f)," and MPEP 2181 I A, "The Corresponding Structure Must Be Disclosed in the Specification Itself in a Way That One Skilled in the Art Will Understand What Structure Will Perform the Recited Function"); it has nothing to do with the fact that the claim does NOT recite sufficient structure. This is NOT a rejection but a claim interpretation.

Rejections under 35 USC 112(b)

Applicant's arguments on pages 21-22 of the Remarks have been fully considered and are partially persuasive. The lack-of-antecedent-basis rejection has been withdrawn. However, the rejection with respect to the redefining of the positions was not addressed.

Rejections under 35 USC 101

Applicant's arguments have been fully considered but are not persuasive.

Applicant argues on page 8 of the Remarks that the Office action alleges that the claims are directed to an abstract idea of "commercial or legal interactions." This is not the case; therefore, the argument is moot.

Applicant argues on pages 10-11 of the Remarks:

The claimed invention is fully implemented within a physical floor-cleaning device that performs coordinated sensing, analysis, and cleaning actions. As described in paragraphs [0041]-[0043] and [0046]-[0049] of the as-filed specification, the system not only detects stains but also determines their precise dimensions, location, and distance relative to the cleaning device. These detected stain attributes are directly utilized to control real-world operational functions of the cleaning equipment, such as adjusting cleaning intensity, activating brushes at the correct moment, or modifying the cleaning path to pass over floor defects (see paragraphs [0048]-[0055] of the as-filed specification). Thus, the system produces outputs that are not merely abstract data, but actionable control instructions that directly improve cleaning performance. (Emphasis added)

Applicant's analysis rests on the premise that the claim is not an abstract idea because there is a specific physical implementation as described in the specification. However, the MPEP establishes that before the abstract-idea analysis, the examiner must begin with a broadest reasonable interpretation (BRI) analysis. Furthermore, in a BRI analysis it is improper to import claim limitations from the specification. See the following excerpts from the MPEP.

MPEP 2106 II, ESTABLISH BROADEST REASONABLE INTERPRETATION OF CLAIM AS A WHOLE, states:

It is essential that the broadest reasonable interpretation (BRI) of the claim be established prior to examining a claim for eligibility. The BRI sets the boundaries of the coverage sought by the claim and will influence whether the claim seeks to cover subject matter that is beyond the four statutory categories or encompasses subject matter that falls within the exceptions.

MPEP 2111.01 II, IT IS IMPROPER TO IMPORT CLAIM LIMITATIONS FROM THE SPECIFICATION, states:

"Though understanding the claim language may be aided by explanations contained in the written description, it is important not to import into a claim limitations that are not part of the claim. For example, a particular embodiment appearing in the written description may not be read into a claim when the claim language is broader than the embodiment."

In this case, "generating… at least one undistorted virtual top view", "detecting… at least one floor stain", "wherein a canvas…has a predefined ratio", and "processing… the at least one floor stain to extract… an attribute", under BRI, do not include the physical implementation argued by Applicant, but a mere mental process that can be executed in the human mind. For this reason, this argument is not persuasive.

Applicant further argues on page 11 that the invention provides a technological improvement to camera-based robotic floor cleaning systems. Again, Applicant resorts to the specification:

As detailed in paragraphs [0036]-[0039] and [0052]-[0054] of the as-filed specification, the disclosed multi-camera processing generates an undistorted virtual top-view (bird's-eye view) image from wide-angle fisheye sources through un-distortion, homography, and perspective transformation. This process reduces optical distortion and expands spatial coverage compared to conventional systems, enabling more accurate stain recognition and localization in real-world dimensions. (Emphasis added)

As discussed above, while the argument of technological improvement may or may not be accurate, the claims are still too broad. It is a stretch to state that generating at least one undistorted virtual top view that corresponds to a surround view image of the floor surface equates to generating an image from wide-angle fisheye sources through un-distortion, homography, and perspective transformation that reduces optical distortion and expands spatial coverage compared to conventional systems, enabling more accurate stain recognition and localization in real-world dimensions. Since these limitations are not in the claims, the argument is not persuasive and the rejection is still proper.
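For reference, the un-distortion, homography, and perspective-transformation pipeline quoted above is standard practice in surround-view imaging. A minimal sketch using OpenCV; every calibration value here is a hypothetical placeholder, not anything from the application:

```python
# Hypothetical sketch: fisheye frame -> undistorted planar top view.
# K (intrinsics), D (fisheye distortion), and the four floor-plane point
# correspondences below are illustrative stand-ins, not values from the filing.
import cv2
import numpy as np

def fisheye_to_top_view(frame, K, D, src_pts, dst_pts, out_size):
    """Undistort a fisheye frame, then warp it to a planar top view."""
    undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
    H = cv2.getPerspectiveTransform(src_pts, dst_pts)  # image plane -> floor plane
    return cv2.warpPerspective(undistorted, H, out_size)

K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
D = np.zeros((4, 1))  # fisheye coefficients k1..k4 (zero = no distortion)
src = np.float32([[100, 300], [540, 300], [620, 470], [20, 470]])  # floor corners in image
dst = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])         # same corners, top view
top_view = fisheye_to_top_view(np.zeros((480, 640, 3), np.uint8), K, D, src, dst, (400, 400))
```

Per-camera top views produced this way could then be stitched into a surround view; a predefined canvas-to-floor-area ratio would fall out of the chosen dst coordinates.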
Applicant further argues STEP 2A PRONG TWO on pages 12-17. With regard to the technological improvement argument, it is noted that the specific limitations of generating an undistorted virtual top view, detecting a stain, and a canvas area with a predefined ratio only describe what was known in the art (see rejections below). No technological advancement is seen in the broad limitations of claim 1. With respect to the limitations "by a floor cleaning device" and "using a first pre-trained machine learning model", as discussed in the rejections below, these are merely interpreted as applying the abstract idea and reciting the words "apply it" (or an equivalent).

Applicant further argues STEP 2B on pages 17-21:

The claims specifically require the use of a non-generic, custom-configured floor-cleaning device equipped with multiple wide-angle image-capturing modules mounted on the exterior of the device body. These modules are structurally and functionally arranged to capture overlapping images of the floor area and generate an undistorted virtual top-view image corresponding to the actual physical floor surface. Additionally, the claims require the use of a first pre-trained machine learning model performing semantic segmentation and classification on the undistorted top-view image using learned visual features such as color, texture, and shape. The model is specifically trained for floor-stain detection and does not rely merely on pixel intensity comparisons but applies high-level feature extraction derived from training data to identify stains, spills, and residues with improved accuracy. The claims further introduce a canvas-area constraint, ensuring that each undistorted top-view image maintains a predefined ratio relative to the real-world floor area covered by the cameras. The structural calibration establishes geometric consistency between image pixels and physical dimensions. The geometric consistency enables precise localization and scale-invariant detection of stains. Moreover, the claimed system processes each detected stain to extract multiple attributes including dimensions, type, distance, and spatial location. The data is then employed within the robotic control subsystem to perform adaptive cleaning operations in real time. These technical elements collectively provide a technological improvement in the functioning of autonomous floor-cleaning systems by integrating calibrated image generation, intelligent stain detection, and context-aware control. The claimed invention transforms raw image data into actionable cleaning instructions without human intervention, enhancing the operational efficiency of the system, spatial awareness, and decision-making accuracy. Thus, the coordinated system addresses real-world challenges in robotic perception and cleaning automation that cannot be effectively achieved through conventional means. Accordingly, the pending claims are integrated into a practical application and recite significantly more than any alleged abstract idea.

Again, Applicant imports limitations from the specification to make the case of "significantly more". Under BRI, claim 1 has four steps: "capturing", "generating", "detecting", and "processing". Of these, the latter three were identified as abstract ideas (mental processes) as discussed above. From MPEP 2106.05 I:

Evaluating additional elements to determine whether they amount to an inventive concept requires considering them both individually and in combination to ensure that they amount to significantly more than the judicial exception itself. Because this approach considers all claim elements, the Supreme Court has noted that "it is consistent with the general rule that patent claims 'must be considered as a whole.'"

The additional step that can be analyzed for "significantly more" is the "capturing images" step. The limitation states:

capturing… a plurality of images of a floor surface using one or more image capturing devices mounted on exterior top sides of a floor cleaning device body aimed in a forward drive direction, wherein the plurality of images correspond to a plurality of wide-angle view images

The element that is claimed is the wide-angle view capturing device, and it has been construed as a well-understood, routine, and conventional element, as evidenced by Ritchey US 2008/0007617, ¶18, "FIG. 2 illustrates a conventional wide-angle panoramic camera", and ¶16, "Finally, several specific applications are put forth in the present invention for using the panoramic camera, processing, and display systems of the present invention as part of a vehicular observation system, …, and finally in a robotic or remotely piloted vehicle system." For this reason, the argument is not persuasive, and the rejection is still valid.

Rejections under 35 USC 103

Applicant argues on pages 25-26 that Kumar does not teach generating at least one undistorted virtual top view image corresponding to a surround view image. Applicant argues on pages 26-27 that Herron discloses only single-camera distortion correction, whereas the claimed invention takes multiple wide-angle images. In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Applicant argues on page 27 that neither reference teaches a "top-down composite of the floor surface itself" or "multi-camera fusion". Kumar teaches in ¶93, "The camera system 420 comprises one or more cameras that capture images and/or video signals as visual data about the environment". Herron teaches in ¶81-82, "Where multiple cameras are used, their field of view overlaps, and that overlap is incorporated into the alignment of the segments. Multiple cameras can also provide redundancy, in the event that the lens of one camera becomes dirty or the images are otherwise blurred, overlapping or subsequent images of the other camera can be weighted with higher quality and used instead… The image quality can be weighted, with a higher quality image replacing a lower quality one, whether the images are from the same camera or multiple cameras. Weighting factors include, for example, speed of the robot when the image was captured, vibration, angle (tilt), agreement with overlapping portions of other segments, closeness of the segment to the camera and illumination. An image taken at a slower robot speed can have a higher weighting than an image taken at a faster speed. An image taken closer to the robot can have a higher weighting than an image taken farther from the robot. An image taken during a tilt or vibration event can have a lower weighting than an image taken without such tilt or vibration, or with less tilt or vibration. Images taken at higher illumination can be weighted higher, up to a level where too much illumination degrades the quality of the image." Therefore, the arguments are not persuasive.
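The weighting scheme Herron ¶81-82 describes amounts to a per-pixel weighted average over aligned, overlapping top-view segments. A hedged sketch of that idea; the NaN coverage mask and the example weight formula are our assumptions, not anything from Herron:

```python
# Hypothetical quality-weighted blend of aligned top-view segments.
# Each segment is an HxW float array with NaN where that camera saw nothing;
# weights encode quality factors such as speed, tilt, and illumination.
import numpy as np

def blend_segments(segments, weights):
    acc = np.zeros_like(segments[0], dtype=float)
    wsum = np.zeros_like(segments[0], dtype=float)
    for seg, w in zip(segments, weights):
        covered = ~np.isnan(seg)           # pixels this segment actually observed
        acc[covered] += w * seg[covered]
        wsum[covered] += w
    out = np.full(acc.shape, np.nan)
    seen = wsum > 0
    out[seen] = acc[seen] / wsum[seen]     # weighted average in the overlap
    return out

# Illustrative weights following Herron's factors (slower and more level = higher):
#   w_i = 1.0 / (1.0 + speed_i + tilt_i)
```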
Finally, Applicant argues on pages 31-33 more limitations explained by the specification:

The claimed invention delivers a novel and superior technical framework integrating: multi-camera image registration and fusion, geometric calibration and canvas ratio enforcement, and semantic deep learning-based stain detection…. the claimed invention introduces a technically advanced stain detection framework operating on an undistorted virtual top-view image synthesized from multiple wide-angle cameras. The fused image constitutes a geometrically accurate and unified top-down representation of the entire floor area, enabling precise visual interpretation. Stain detection is carried out by a pre-trained deep learning model, such as a convolutional neural network trained on annotated examples of various stain types, floor textures, and contamination patterns. The model performs semantic segmentation and classification using learned image features including color, texture, shape, and spatial context… By contrast, the claimed invention performs multi-camera geometric fusion to create a corrected and spatially normalized top-view image, and applies an advanced, pre-trained deep learning model to achieve context-aware, reliable stain detection in real time… In contrast to Kumar, the claimed invention performs a more advanced and structured analysis after detecting a floor stain. The system processes the detected stain to extract detailed attributes such as the stain's dimensions, the type of stain from a predefined set of stain categories, the distance of the stain from each camera, and the stain's exact location within the floor area. The extracted attributes provide quantitative and semantic information for supporting intelligent decision-making in cleaning operations... Furthermore, the multi-attribute analysis enables the device to assess stain severity, determine appropriate cleaning intensity, and optimize the cleaning path. Unlike the focus of Kumar to perform only binary labeling of "clean" or "dirty" pixels, the claimed invention derives multiple, context-aware stain properties from the processed image data. The data allows the system to operate with higher accuracy and adaptive cleaning logic…. Kumar does not identify or calculate specific parameters of stains such as size, type, distance, or spatial location. The approach of Kumar does not involve feature extraction or contextual analysis of stain attributes. Kumar focuses only on generating labeled data for neural network training. The claimed invention integrates advanced image processing and machine learning to extract multiple quantitative and qualitative attributes of each stain, enabling precise spatial correlation and adaptive cleaning control. This approach reflects a higher level of intelligence and technical sophistication not present in the system of Kumar.

In conclusion, the Examiner concedes that there are differences between the prior art and the invention as disclosed in the specification. However, throughout the Remarks, Applicant has improperly tried to import limitations from the specification, and has incorrectly argued that the claims somehow imply these limitations. For the following response, the Examiner respectfully proposes adding claim limitations that clearly limit the claims to the differences between the invention and the prior art, as expressed in the current Remarks, while explaining those differences with concise arguments pointing to the specific claim limitations.
Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "floor cleaning device," disclosed as element 102 and equivalents thereof.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the image capturing devices mounted on exterior top sides of the floor cleaning device body aimed in a forward drive direction must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

In claim 1, the recitation of "image capturing devices mounted on exterior top sides of the floor cleaning device body aimed in a forward drive direction" contradicts figure 1 of the drawings. It is unclear if Applicant is redefining the position as seen in the figures. Claims 2-7 are rejected by dependency.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1, STEP 2A PRONG ONE: The claim recites "generating at least one undistorted virtual top view", "detecting at least one floor stain", and "processing the at least one floor stain to extract an attribute". All these limitations can be construed as mental processes that can be performed by the human mind.

STEP 2A PRONG TWO: These judicial exceptions are not integrated into a practical application because the limitation "by a floor cleaning device" is interpreted as merely applying the abstract idea: merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea has been identified by the courts as not integrating a judicial exception into a practical application (MPEP 2106.04(d) I). Also, the recitation of "capturing images" only adds insignificant extra-solution activity to the judicial exception.

STEP 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the limitation "capturing images" is well understood, routine, and conventional in the art of image analysis.

Claims 2 and 5, STEP 2A PRONG ONE: The claims recite "blending the plurality of bird eye view images" and "calculating the distance between the bottom edge of the surround view image and the lower edge of the pixel boundary of the at least one floor stain". These limitations can be construed as mathematical relationships or calculations as explained in MPEP 2106.04(a)(2) I.A and MPEP 2106.04(a)(2) I.C.

Claims 2-5, STEP 2A PRONG TWO: The limitations in claims 2-5 recite "generate a plurality of bird eye view images", using a trained CNN, using an SVM, and "identifying a pixel boundary". These limitations provide nothing more than mere instructions to implement an abstract idea on a generic computer and indicate a field of use or technological environment in which the judicial exception is performed. See MPEP 2106.05(f). These limitations only recite either an input necessary for analysis or the outcome of analyzing stain images. They do not include any details about how the input and/or the analysis is accomplished.

Claims 2-5, STEP 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the limitations are well understood, routine, and conventional in the art of image analysis.

Claim 6, STEP 2A PRONG ONE: The claim recites "detecting an object", "processing the object", and "generate an alarm". All these limitations can be construed as mental processes that can be performed by the human mind. Even "generate an alarm" can be construed under BRI as generating a mental alarm. Claim 6, STEP 2A PRONG TWO and STEP 2B are similar to the analysis above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2 and 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar US 2021/0245081 in view of Herron US 2019/0183310.

Re claim 1, Kumar discloses a method for detecting a floor stain [¶16, "[t]he autonomous vacuum may additionally or alternatively use a neural network to detect dirt within the environment"], the method comprising:

capturing, by a floor cleaning device [fig. 7], a plurality of images of a floor surface using one or more image capturing devices 710 mounted on exterior top sides of the floor cleaning device body aimed in a forward drive direction [see 112(b) rejection above], wherein a plurality of images corresponds to a plurality of wide-angle view images [stereo cameras 710];

generating a top view image that corresponds to a surround view image of the floor surface [fig. 9];

detecting, by the floor cleaning device, at least one floor stain from the at least one virtual top view image of the floor surface using a first pre-trained machine learning model [¶16, "[t]he autonomous vacuum may additionally or alternatively use a neural network to detect dirt within the environment"], wherein a canvas area of the at least one undistorted virtual top view image has a predefined ratio relative to a floor area covered by the one or more image capturing devices [fig. 9];

processing, by the floor cleaning device, the at least one floor stain to extract at least one floor stain attribute from one or more floor stain attributes, wherein the one or more floor stain attributes comprise: dimensions of the floor stain, a floor stain type from a set of floor stain types, a distance of the at least one floor stain from each of the one or more image capturing devices, and a location of the at least one floor stain in the floor area [¶120, "the detection module 530 may receive visual data of the environment as the autonomous vacuum 100 clean the environment. The detection module 530 may pair the visual data to locations of the autonomous vacuum 100 determined by the mapping module 500 as the autonomous vacuum moved to clean. The detection module 530 estimates correspondence between the visual data to pair visual data of the same areas together based on the locations. The detection module 530 may compare the paired images in the RGB color space (or any suitable color or high-dimensional space that may be used to compute distance) to determine where the areas were clean or dirty and label the visual data as "clean" or "dirty" based on the comparison"].

Kumar does not specifically teach generating at least one undistorted virtual top view image of the floor surface using the plurality of wide-angle images captured of the floor surface. Herron teaches generating, by the floor cleaning device, at least one undistorted virtual top view image of the floor surface using the plurality of images captured of the floor surface, wherein the at least one undistorted virtual top view image corresponds to a surround view image of the floor surface [¶6, "transforming from the robot perspective view to the planar view, a distortion algorithm is applied to correct the image distortion due to the lens. The transformation from the robot perspective view to the planar view utilizes the known manufactured height of the camera off the floor and known downward angle of the camera to correlate an image pixel position with a planar floor position"]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the transforming step of Herron with the method of Kumar in order to yield the predictable result of creating an undistorted image for analyzing stains.

Re claim 2, Kumar and Herron further teach wherein generating the surround view image of the floor surface further comprises: generating a plurality of bird eye view images from the plurality of wide-angle view images [¶176 of Kumar]; and blending the plurality of bird eye view images to generate the surround view image of the floor surface [Abstract of Herron, "a method is disclosed for a robot (e.g., cleaning robot) to produce a bird's eye view (planar) map by transforming from a robot camera view and stitching together images tagged by location"]. The system taught by Kumar and Herron would be capable of facilitating distance calculation between the floor cleaning device and the floor stain in a metric unit, wherein the metric unit corresponds to one of: centimeter, millimeter, and meter unit.

Re claim 5, Kumar and Herron further teach identifying a pixel boundary of the at least one floor stain in the surround view image of the floor surface in pixels for locating the at least one stain, wherein the pixels of the pixel boundary in the surround view image are calibrated to a real world distance with respect to calibration of each of the one or more image capturing devices; and calculating the pixel distance of the at least one floor stain and the capturing device [¶120 of Kumar, "detection module 530 estimates correspondence between the visual data to pair visual data of the same areas together based on the locations. The detection module 530 may compare the paired images in the RGB color space (or any suitable color or high-dimensional space that may be used to compute distance) to determine where the areas were clean or dirty and label the visual data as "clean" or "dirty" based on the comparison… The detection module 530 may analyze the surface in the visual data pixel-by-pixel for pixels that do not match the pixels of the surface type of the area and label pixels that do not match as "dirty" and pixels that do match as "clean." The detection module 530 trains the neural network on the labeled visual data to detect dirt in the environment"]; and [Herron in fig. 9 and ¶69, "FIG. 9 is a diagram of cleaning robot with a camera illustrating the calculated planar position of a point in the camera's image according to an embodiment. Cleaning robot 902 has a camera 904 mounted in the front. The camera has a field of view, illustrated by the area between upward line 916 and downward line 918. The camera is mounted a known distance 906 off a floor or other supporting surface 908. For any particular pixel in an image in the camera's field of view, the planar position can be calculated from the position and the camera height 906. For example, point 912 is viewed by the camera at a pixel position illustrated by where a line 914 contacts the camera. This pixel in the image can then be associated with a position corresponding to a center distance 910, from the camera to point 912 on floor 908" … ¶72, "[f]rom that known position, and the known position of the robot cleaner, the position of pixels in the image can be calculated"]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to specifically calculate the distance between the bottom edge of the surround view image and the lower edge of the pixel boundary of the at least one floor stain in order to yield the predictable result of determining a distance between the robot cleaner and the stain. (A simplified version of this pixel-to-floor geometry is sketched in code after the Office action text below.)

Re claim 6, Kumar and Herron further teach detecting an object in the surround view image corresponding to a vicinity of the floor cleaning device; processing the object to extract at least one object attribute, wherein the object attribute comprises at least one of: a type of object and a distance of the object from each of the one or more image capturing devices; and generating an alarm based on the distance of the object from each of the one or more image capturing devices above a predefined threshold value [¶108 of Kumar, "[t]he immediate level contains a mapping of objects within a threshold radius of the autonomous vacuum 100. The threshold radius may be set at a predetermined distance" … ¶129 of Kumar, "[t]he navigation module 560 uses the immediate level of the map to determine how to navigate the environment to execute cleaning tasks on the task list. The immediate level describes the locations of objects within a certain vicinity of the autonomous vacuum 100, such as within the field of view of each camera in the camera system 420. These objects may pose as obstacles for the autonomous vacuum 100, which may move around the objects or move the objects out of its way… The navigation module 560 receives the first cleaning task in the task list database 550, which includes a location of the mess associated with the cleaning task. Based on the location determined from localization and the objects in the immediate level, the navigation module 100 determines a path to the location of the mess. In some embodiments, the navigation module 560 updates the path if objects in the environment move while the autonomous vacuum 100 is in transit to the mess. Further, the navigation module 560 may set the path to avoid fragile objects in the immediate level (e.g., a flower vase or expensive rug)" … ¶150 of Kumar, "[t]he mapping module 500 further analyzes the visual data to determine the objects in the environment… Some objects may be barriers that define a room or obstacles that the autonomous vacuum 100 may need to remove, move, or go around, such as a pile of books. To identify the objects in the environment…. the mapping module 500 may use the pretrained object module 505, which may be neural network based, to detect and pixel-wise segment objects such as chairs, tables, books, shoes…, the mapping module 500 may construct a virtual border around the top of a staircase in the map such that the autonomous vacuum 100 does not enter the virtual border to avoid falling down the stairs. As another example, the mapping module 500 may tag a baby with a warning that the baby is more fragile than other people in the environment"]. The acts of avoiding objects, creating a virtual border, and creating a warning can be construed as "generating an alarm based on the distance".

Re claim 7, Kumar and Herron further teach cleaning the at least one floor stain based on the processing of the at least one floor stain [¶5 of Kumar, "this disclosure describes an autonomous cleaning system for identifying and automatically cleaning various surface and mess types using automated cleaning structures and components"].

Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar US 2021/0245081 in view of Herron US 2019/0183310 and in further view of Ross US 2021/0213616.

Re claims 3-4, Kumar and Herron teach the invention as discussed above but fail to teach wherein the first pre-trained machine learning model corresponds to an object detection based Convolutional Neural Network (CNN) model, and wherein the floor stain type is extracted from the set of floor stain types using a second pre-trained machine/deep learning model, wherein the second pre-trained machine learning model corresponds to a Support Vector Machine (SVM) classification-based machine learning model. Ross teaches using CNNs and SVMs [¶84, "neural network 300 may refer to a neural network as depicted in FIG. 3 (i.e., a fully connected network), a convolutional neural network, feed forward neural network, recurrent neural network, deep convolutional neural network, a generative adversarial network, support vector machines, long-short term memory ("LSTM") networks, auto encoder networks, and/or other conventional neural networks known within the art"]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ross with Kumar in order to yield the predictable result of using well-known neural networks for training of the robot cleaner.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Correspondence

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Carlos A. Rivera, whose telephone number is (571) 270-5697. The examiner can normally be reached 9AM-4PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Keller, can be reached at (571) 272-8548. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C. A. RIVERA/
Primary Patent Examiner, Art Unit 3723
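The pixel-to-floor correspondence Herron relies on (¶69, quoted in the claim 5 rejection above) reduces to simple trigonometry once camera height, downward tilt, and vertical field of view are fixed. A sketch under assumed pinhole-style parameters; none of the numbers come from the application:

```python
# Hypothetical sketch: map an image row to a distance along the floor.
# Assumes a camera at height h tilted downward, with a linear row-to-angle
# mapping (an approximation; a strict pinhole model would use atan).
import math

def row_to_floor_distance(row, image_height, cam_height_m, tilt_deg, vfov_deg):
    # Angular offset of this row from the optical axis (row 0 = top of image).
    offset_deg = (row - image_height / 2) / (image_height / 2) * (vfov_deg / 2)
    angle_down = tilt_deg + offset_deg        # ray angle below horizontal
    if angle_down <= 0:
        return float("inf")                   # at/above the horizon: no floor hit
    return cam_height_m / math.tan(math.radians(angle_down))

# Camera 0.3 m above the floor, tilted 30 degrees down, 60-degree vertical FOV:
print(row_to_floor_distance(400, 480, 0.3, 30.0, 60.0))  # ~0.25 m ahead
```

With a per-camera mapping like this, the distance from a stain's lower pixel boundary to the image's bottom edge (the claim 5 calculation) converts directly into a metric distance from the robot.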

Prosecution Timeline

Aug 07, 2023
Application Filed
Sep 29, 2025
Non-Final Rejection — §101, §103, §112
Dec 16, 2025
Response Filed
Feb 24, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599276: Cleaner Station with Fixing Unit (granted Apr 14, 2026; 2y 5m to grant)
Patent 12600008: Substrate Polishing Apparatus and Method of Polishing Substrate Using the Same (granted Apr 14, 2026; 2y 5m to grant)
Patent 12594645: Method and Disk Carrier for Use in Polishing Glass Substrate Disks (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589344: Vacuum Cleaner and Mold Device (granted Mar 31, 2026; 2y 5m to grant)
Patent 12588793: Robot Cleaner, Control System of Robot Cleaner and Control Method of Robot Cleaner (granted Mar 31, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 99% (+29.2%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate
Based on 501 resolved cases by this examiner. Grant probability derived from career allow rate.
