DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Pending: 1-17
Rejected under 35 U.S.C. 102: 1-2, 6-10, 14-17
Rejected under 35 U.S.C. 103: 3-5, 11-13
Priority
Applicant’s indication of Domestic Benefit/National Stage information based on provisional application 63/547,866 filed 11/09/2023 is acknowledged.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 6-10, and 14-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Suvarna et al. (US 2019/0061157 A1, “Suvarna”).
Regarding claims 1, 9, and 17: Suvarna teaches: A system, comprising: ([0022] robot with LIDAR turret. calculate both distance to objects and location of robot. [0031] robot processor system)
one or more robots; and ([0022] robot. [0026] control multiple robots)
a server coupled to the one or more robots, the server comprising one or more processors configured to execute computer readable instructions to: ([0026] control application display for robot. [0030] Through Internet, robot controlled, and send info back to remote user. remote server provide commands, process data uploaded from robot. commands sent to server for further processing, then forwarded to robot over internet. [0031]-[0033] robot processor system, remote server, other computing system. Computing system, processor, memory. Processing, storage, communications subsystems)
receive a first query from a first device coupled to the server, the first query comprises a request for one or more maps from a memory associated with the one or more robots; ([0063] robot will follow and explore floor plan and build map. user can select training icon on robot application on user device. robot will generate coverage map showing explored area and upload it to remote server. server will then download coverage map to user application on user device. user now has choice of saving this map to use with virtual boundaries or to cancel and retry exploration. [0065] robot's global map with user-drawn virtual boundaries for robot. virtual boundaries will restrict robot to portion of room it is in, and prevent robot from going to portion or other rooms)
provide a respective map of the one or more maps to the first device from the memory associated with the one or more robots; ([0063] robot will generate coverage map showing explored area and upload it to remote server. server will then download coverage map to user application on user device. user now has choice of saving this map to use with virtual boundaries or to cancel and retry exploration)
receive, from the first device, a set of user selected coordinates corresponding to an area on the respective map; ([0063] download coverage map to user application on user device. user now has choice of saving this map to use with virtual boundaries or to cancel and retry exploration. Once user satisfied with resulting exploratory map, user now uses mobile app to draw virtual boundaries. app supports both insertion and deletion of boundaries. [0064] user uses finger on touch screen to draw virtual boundary on map. portions of map can be labelled, and user can indicate which portions are to be avoided by robot, with application then determining where virtual boundaries need to be drawn to close such portions to robot. user can toggle between cleaning designated area, and marking it off-limits, by tapping area. Each tap would toggle setting between go and no-go. Different rooms on map may be indicated by different colors, or colors or other visual indicators may be used to indicate areas to be cleaned and areas to be avoided. user could indicate virtual boundaries on any device and have it transferred to another device. [0065] robot's global map with user-drawn virtual boundaries for robot. virtual boundaries will restrict robot to portion of room it is in, and prevent robot from going to portion or other rooms. user initiate boundary drawing by touching boundary icon, then drawing lines with user's finger. icon dragged to desired spot, expanded, contracted, and rotated to place it in desired location)
correspond the area with at least one media file via a user input of the media file onto the area; ([0063] Once user satisfied with resulting exploratory map, user now uses mobile app to draw virtual boundaries. app supports both insertion and deletion of boundaries. After desired boundaries have been saved, user can begin cleaning run in which robot honors these virtual boundaries. Honoring virtual boundaries while cleaning dependent on robot localizing in coverage map generated by previous cleaning/training run. [0064] user can indicate which portions are to be avoided by robot, with application then determining where virtual boundaries need to be drawn to close such portions to robot. user can toggle between cleaning designated area, and marking it off-limits, by tapping area. Each tap would toggle setting between go and no-go. Different rooms on map may be indicated by different colors, or colors or other visual indicators may be used to indicate areas to be cleaned and areas to be avoided)
communicate the respective map, the area on the respective map, and the at least one media file, to the one or more robots, the one or more robots are configured to execute at least a portion of one or more of the at least one media files upon entering the area; and ([0063] Once user satisfied with resulting exploratory map, user now uses mobile app to draw virtual boundaries. After desired boundaries have been saved, user can begin cleaning run in which robot honors these virtual boundaries. Honoring virtual boundaries while cleaning dependent on robot localizing in coverage map generated by previous cleaning/training run. [0064] user can indicate which portions are to be avoided by robot. user can toggle between cleaning designated area, and marking it off-limits, by tapping area. [0065] robot's global map with user-drawn virtual boundaries for robot. virtual boundaries will restrict robot to portion of room it is in, and prevent robot from going to portion or other rooms)
store the respective map, the user selected coordinates on the respective map, and the at least one media files corresponding to the area on the respective map in a different memory ([0027] electronic system for robot. processor operates program downloaded to memory. bus or other electrical connections. [0031]-[0033] robot processor system, remote server, other computing system. Computing system, processor, memory. Processing, storage, communications subsystems. [0040]-[0043] storage and memory. software programs executed by processors. non-transitory computer-readable storage medium. Storage subsystem. [0063] robot will follow and explore floor plan and build map. Once user satisfied with resulting exploratory map, user now uses mobile app to draw virtual boundaries. [0064] user uses finger on touch screen to draw virtual boundary on map. portions of map can be labelled, and user can indicate which portions are to be avoided by robot. [0065] robot's global map with user-drawn virtual boundaries for robot. virtual boundaries will restrict robot to portion of room it is in, and prevent robot from going to portion or other rooms).
Claims 9 and 17 correspond in scope to claim 1 and are similarly rejected. Claim 9 additionally recites and Suvarna further teaches: A method, comprising ([0090]-[0091] method). Claim 17 additionally recites and Suvarna further teaches: A non-transitory computer readable medium having computer readable instructions stored that when executed by at least one processor configure the at least one processor to ([0040]-[0043] storage and memory. software programs executed by processors. non-transitory computer-readable storage medium. Storage subsystem).
Regarding claims 2 and 10: Suvarna further teaches: The system of claim 1, wherein the at least one media file comprises an interactive media program configured to prompt the user to provide a user input to the robot and execute a response to the user input received by the robot ([0063] robot will follow and explore floor plan and build map. Once user satisfied with resulting exploratory map, user now uses mobile app to draw virtual boundaries. [0064] user uses finger on touch screen to draw virtual boundary on map. portions of map can be labelled, and user can indicate which portions are to be avoided by robot. [0065] robot's global map with user-drawn virtual boundaries for robot. virtual boundaries will restrict robot to portion of room it is in, and prevent robot from going to portion or other rooms). Claim 10 corresponds in scope to claim 2 and is similarly rejected.
Regarding claims 6 and 14: Suvarna further teaches: The system of claim 2, wherein, the one or more media files executed by the one or more robots comprises at least one of (i) an emission of a sound and (ii) a display of visual media, wherein the navigation by the one or more robots is not modified by the execution of the media file ([0026] display provides feedback to user, such as message that the cleaning robot has finished. [0063] robot will follow and explore floor plan and build map. user can select training icon on robot application on user device. robot will generate coverage map showing explored area and upload it to remote server. server will then download coverage map to user application on user device. user now has choice of saving this map to use with virtual boundaries or to cancel and retry exploration. Once user satisfied with resulting exploratory map, user now uses mobile app to draw virtual boundaries. app supports both insertion and deletion of boundaries. After desired boundaries have been saved, user can begin cleaning run in which robot honors these virtual boundaries. Honoring virtual boundaries while cleaning dependent on robot localizing in coverage map generated by previous cleaning/training run). Claim 14 corresponds in scope to claim 6 and is similarly rejected.
Regarding claims 7 and 15: Suvarna further teaches: The system of claim 1, wherein, the media file is provided by a second device different from the device that provides the user selected coordinates ([0031]-[0033] robot processor system, remote server, other computing system. Computing system, processor, memory. Processing, storage, communications subsystems. [0063] user can select training icon on user device. robot will generate coverage map and upload it to remote server. server then download coverage map to user application. Once user satisfied with resulting exploratory map, user now uses mobile app to draw virtual boundaries. app supports both insertion and deletion of boundaries. After desired boundaries have been saved, user can begin cleaning run in which robot honors these virtual boundaries. [0064] user draws virtual boundary on map. application then determining where virtual boundaries need to be drawn to close such portions to robot. user can toggle between cleaning designated area, and marking it off-limits, by tapping area. user could indicate virtual boundaries on any device and have it transferred to another device. [0065] virtual boundaries will restrict robot to portion of room it is in, and prevent robot from going to portion or other rooms. user initiate boundary drawing by touching boundary icon, then drawing lines with user's finger). Claim 15 corresponds in scope to claim 7 and is similarly rejected.
Regarding claims 8 and 16: Suvarna further teaches: The system of claim 1, wherein the one or more processors are further configured to execute the computer readable instructions to: determine that the memory comprises a memory device of a first set of robots, the first set of robots comprising at least one robot; ([0026] control application display for robot. display provides ability to control multiple robots. display provides feedback to user, such as message robot has finished. [0027] electronic system for robot. processor operates program downloaded to memory. bus or other electrical connections. [0030] Through Internet, robot controlled, and send info back to remote user. remote server provide commands, process data uploaded from robot. commands sent to server for further processing, then forwarded to robot over internet. [0031]-[0033] robot processor system, remote server, other computing system. Computing system, processor, memory. Processing, storage, communications subsystems. [0088]-[0089] multiple robots)
transmit a signal to the first set of robots, the signal comprising a request for the respective map to be uploaded to the server from the memory device of the first set of robots; and ([0063], [0088] multiple robots are used. robot can transmit each room of the map as it is generated to the second robot, so the second robot need not do mapping and need not include the processing needed for mapping. This transmission can be direct, by way of a user device, or by uploading to a remote server which then downloads to the second robot. Multiple maps for multiple floors or floor plans can be shared. [0089] With multiple robots, each can be assigned a portion of the area to be cleaned, with the areas handled by other robots being blocked off with virtual boundaries. Thus, different virtual boundaries would be provided to each robot)
store the respective map in the second memory device of the server prior to providing the respective map to the first device ([0026] control multiple robots. [0027], [0030]-[0033], [0063], [0088]-[0089]). Claim 16 corresponds in scope to claim 8 and is similarly rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3-5 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Suvarna et al. (US 2019/0061157 A1, “Suvarna”) in view of Collis et al. (US 2023/0114258 A1, “Collis”).
Regarding claims 3 and 11: Suvarna further teaches: The system of claim 2. However, Suvarna does not explicitly teach: wherein, the interactive media program comprises an item finding program, the user input comprises a selection of one or more items, and the response to the user input comprises at least one of (i) an indication of a location of the one or more items on the map and (ii) navigating the robot to the location of the one or more items via the map.
Collis teaches: wherein, the interactive media program comprises an item finding program, the user input comprises a selection of one or more items, and the response to the user input comprises at least one of (i) an indication of a location of the one or more items on the map and (ii) navigating the robot to the location of the one or more items via the map ([0185]-[0193] data from user device received at transceiver of robot. comprises identifier for object in list user wishes to locate in environment. identify set of keys as object user wishes to locate. in response to receipt of control data, robot and sensors are operated to search environment to determine location of object in environment. robot transmit indication of determined location of identified object in environment to user device. robot transmits generated list of objects to user device. robot transmits set of parameters representative of environment of robot to electronic user device. last known location for object in list generated. list may comprise object and identifier as ‘keys’ and may list last known location as ‘kitchen table’. operating robot to move proximate to last known location of identified object. operating robot to move proximate to ‘kitchen table’, which last known location of ‘keys’. operating robot to move identified object to given location. location may be different location from last known location within environment. operating robot to move ‘keys’ to ‘the key hook’. location may be location of user within environment, such step comprises operating robot to bring ‘keys’ to user. control data comprise new, given location for object. control data may therefore specify ‘keys’ should have new location of ‘the key hook’. robot operated to move ‘keys’ to ‘the key hook’).
Suvarna and Collis are analogous art to the claimed invention since they are from the same field of mobile robot controls and user interaction. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the invention of Suvarna with the aspects of Collis to create, with a reasonable expectation of success, a robot system with an item finding program, wherein the user input comprises a selection of items, and the response to the user input comprises at least one of (i) an indication of a location of the one or more items on the map and (ii) navigating the robot to the location of the one or more items via the map. The motivation for modification would have been to allow a robot to adopt a more systematic navigation pattern by viewing, understanding, and recognizing the area around it, creating a more effective robot (Collis, [0006]), while also enabling a user to find objects more quickly, improving the user experience (Collis, [0073]). This motivation for modification similarly applies to claims that depend upon claim 3, including claims 4 and 5.
Claim 11 corresponds in scope to claim 3 and is similarly rejected. The motivation for modification is the same as that recited for claim 3. This motivation for modification similarly applies to claims that depend upon claim 11, including claims 12 and 13.
Regarding claims 4 and 12: Suvarna-Collis further teach: The system of claim 3, wherein the one or more processors are further configured to execute the computer readable instructions to: receive a plurality of images from the one or more robots; identify one or more items within the images; and localize the one or more items on the respective map based at least in part upon a location of where the images were taken on the map and a location of the one or more items in the plurality of images (Suvarna: [0069] objects sensed by imaging with camera. [0090] robot has a camera and can provide images or video to the user to indicate virtual boundary areas. robot could map areas and identify them, such as by image recognition (dishwasher means it's kitchen), or by prompting the user to label areas from the map and/or images. object or areas could be identified on a map, and a user could tap them to place off limits, and double tap to remove the virtual boundaries (or vice-versa). The robot would then draw a virtual boundary around the indicated object or area. Collis: [0185]-[0193] data from user device received at transceiver of robot. comprises identifier for object in list user wishes to locate in environment. robot and sensors are operated to search environment to determine location of object in environment. robot transmit indication of determined location of identified object in environment to user device. list may comprise object and identifier as ‘keys’ and may list last known location as ‘kitchen table’. operating robot to move proximate to last known location of identified object. operating robot to move proximate to ‘kitchen table’, which last known location of ‘keys’. operating robot to move identified object to given location). Claim 12 corresponds in scope to claim 4 and is similarly rejected.
Regarding claims 5 and 13: Suvarna-Collis further teach: The system of claim 4, wherein the one or more processors are further configured to execute the computer readable instructions to: receive a second query from the device, the second query comprises a request for one or more items selected via the device; and provide an indication of the location for the selected one or more items via at least one of navigating to the location or displaying the computer readable map with the localization information corresponding to each of the selected one or more items (Suvarna: [0069] objects sensed by imaging with camera. [0090] robot has a camera and can provide images or video to the user to indicate virtual boundary areas. robot could map areas and identify them, such as by image recognition (dishwasher means it's kitchen), or by prompting the user to label areas from the map and/or images. object or areas could be identified on a map, and a user could tap them to place off limits, and double tap to remove the virtual boundaries (or vice-versa). The robot would then draw a virtual boundary around the indicated object or area. Collis: [0185]-[0193] data from user device received at transceiver of robot. comprises identifier for object in list user wishes to locate in environment. identify set of keys as object user wishes to locate. in response to receipt of control data, robot and sensors are operated to search environment to determine location of object in environment. robot transmit indication of determined location of identified object in environment to user device. robot transmits generated list of objects to user device. robot transmits set of parameters representative of environment of robot to electronic user device. last known location for object in list generated. list may comprise object and identifier as ‘keys’ and may list last known location as ‘kitchen table’. operating robot to move proximate to last known location of identified object. operating robot to move proximate to ‘kitchen table’, which last known location of ‘keys’. operating robot to move identified object to given location. location may be different location from last known location within environment. operating robot to move ‘keys’ to ‘the key hook’. location may be location of user within environment, such step comprises operating robot to bring ‘keys’ to user. control data comprise new, given location for object. control data may therefore specify ‘keys’ should have new location of ‘the key hook’. robot operated to move ‘keys’ to ‘the key hook’). Claim 13 corresponds in scope to claim 5 and is similarly rejected.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MADISON B EMMETT whose telephone number is (303)297-4231. The examiner can normally be reached Monday - Friday 9:00 - 5:00 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tommy Worden, can be reached at (571)272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MADISON B EMMETT/Examiner, Art Unit 3658