Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 24-DEC-2025 has been entered.
Response to Amendment
The amendment filed 24-DEC-2025 has been entered. Claims 1, 9, and 16 are amended; claims 3, 5-8, 11, and 13-15 are previously presented; and claims 2, 4, 10, 12, 17, and 19 are cancelled. Claims 1, 3, 5-9, 11, 13-16, 18, and 20 remain pending in the application.
Response to Arguments
Applicant’s arguments, filed 24-DEC-2025, with respect to the rejections of claims 1, 3, 5-9, 11, 13-16, 18, and 20 under 35 U.S.C. §§ 101 and 103 have been fully considered but are not persuasive. On page 3, Applicant argues that Schluntz fails to disclose determining facial expression information associated with a detected face, let alone determining that facial expression information associated with an individual located within a first area is correlated with a personal privacy risk. The examiner disagrees. The claim is written in a way that makes it unclear what “that is correlated with a personal privacy risk” refers to; in particular, this phrase could be modifying “the first area”, “an individual”, “facial expression information”, or “image data”. The broadest reasonable interpretation of this limitation therefore includes the interpretation that the first area is correlated with a personal privacy risk, and that the image recognition software is merely identifying that facial expression information is present in the image. Merely recognizing that facial expression information is present in the image is taught by Schluntz, as is the first area correlated with a personal privacy risk. The application of the art can be seen in further detail below in the 103 rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 4, 7, 9, 11, 12, 16, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Schluntz et al. (US 2021/0342479 A1) in view of Nishikawa (US 2018/0338116 A1).
Regarding claim 1, Schluntz teaches: A computer-implemented method for preventing robotic device privacy overreach, the method comprising: training a machine learning model, using training image data and natural language processing data (Paragraph [162], " In some embodiments, an image may include text in multiple regions of the image...The robot 100 may use machine learning based on a neural network trained to detect text."), to identify personal privacy boundaries within one or more environments (Figure 10a; element 1020; Paragraph [162], "After detecting text, the robot 100 may identify a bounding box around a perimeter of the region that includes the text"); deploying the machine learning model to a robotic device (Paragraph [162], " The robot 100 may use machine learning based on a neural network trained to detect text"), wherein the robotic device includes sensors to generate image data and audio data for a first environment of the robotic device (element 722, 724, 720A, 720B, 810); analyzing, using the machine learning model, the image data and audio data to identify a first area associated with a personal privacy boundary (Paragraph [161-162], "Since confidential information is commonly in the form of text, the robot 100 may identify text in images…Similarly, audio can include confidential information such as private conversations... may pause collecting audio when it detects human voice"); wherein identifying the first area associated with the personal privacy boundary includes determining, using natural language processing, that sensitive information is being discussed within the first area (Paragraph [111], "(such as national language processing). 
In such embodiments, the robots can request that the central system 210 (which may include greater processing capabilities and resources) to instead perform such functions."; Paragraph [6], "To avoid recording private conversations, the robot may detect human voice in the audio and remove the human voice from the audio"), and determining, using image recognition software (Paragraph [187], “In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Further, the functionalities described herein can be performed by a hardware processor or controller located within the robot.”), that the image data includes facial expression information associated with an individual (Paragraph [165], “In some embodiments, the robot 100 may detect a person in an image… Responsive to detecting a person, the robot 100 may identify a bounding box around the person's face”) located within the first area that is correlated with a personal privacy risk (Paragraph [179-180], “Similarly, the controller may generate a bounding box around objects or faces.”); classifying, using the machine learning model and based on the analyzing, the first area as being associated with the personal privacy boundary (Paragraph [179-180]); and programming the robotic device to respond to the first area classified as being associated with the personal privacy boundary, the programming causing the robotic device to dynamically perform one or more operations related to the personal privacy boundary (Paragraph [181])
While Schluntz teaches the claim limitations as stated above, it does not expressly disclose:
wherein the one or more operations includes the robotic device moving to a second area that is out of range of the sensitive information being discussed within the first area associated with the personal privacy boundary
However, Nishikawa teaches: wherein the one or more operations includes the robotic device moving to a second area that is out of range of the sensitive information being discussed within the first area associated with the personal privacy boundary (Figure 3; Paragraph [3])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robot of Schluntz, which identifies regions of confidential text, face, and speech from audio and image data, to include evacuating outside of the area where the private conversation is taking place, as taught by Nishikawa. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a robot identifying regions of confidential text, face, and speech from audio and image data and then evacuating outside of the area where the private information is.
Regarding claim 3, Schluntz teaches: avoiding collecting any data with respect to the personal privacy boundary associated with the first area (Paragraph [161, 181, 164], "In other embodiments, the robot 100 may stop recording audio completely when human voice is detected.")
While Schluntz teaches the claim limitations as stated above, it does not expressly disclose:
avoiding coming within a predefined distance of the personal privacy boundary associated with the first area
avoiding crossing the personal privacy boundary associated with the first area
and avoiding the personal privacy boundary associated with the first area based on a predetermined schedule
However, Nishikawa teaches: avoiding coming within a predefined distance of the personal privacy boundary associated with the first area (Paragraph [163], "Thereafter, if the telepresence robots 1a and 1b attempt to enter the first conversation-listening area, an evacuation order is transmitted, and entry into the first conversation-listening area is prohibited"); avoiding crossing the personal privacy boundary associated with the first area (Paragraph [165]); and avoiding the personal privacy boundary associated with the first area based on a predetermined schedule (Paragraph [166], "the robot position control section 35 may cause the time intervals between the determination that a telepresence robot is on the outside of the first conversation-listening area and the transmission of the stop order to be different time intervals")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robot of Schluntz and Nishikawa, which identifies regions of confidential text, face, and speech from audio and image data, blurs or blacks out the text or face or stops recording the audio, and evacuates outside of the area where the private information is, to include the evacuation process as taught by Nishikawa. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a robot identifying regions of confidential text, face, and speech from audio and image data, blurring or blacking out the text or face or stopping recording the audio, and evacuating outside of and preventing entry into the area where the private information is, including at time intervals.
Regarding claim 4, Schluntz teaches: The computer-implemented method of claim 1, wherein the image data includes facial expression information associated with an individual located within the first area associated with the personal privacy boundary (Paragraph [165])
Regarding claim 7, Schluntz teaches: The computer-implemented method of claim 1, wherein the machine learning model and the robotic device are communicatively connected through a cloud computing network (Figure 2; element 200)
Regarding claim 9, Schluntz teaches: A system for preventing robotic device privacy overreach comprising: a processor; and a computer-readable storage medium communicatively coupled to the processor and storing program instructions which, when executed by the processor, cause the processor to perform a method (Paragraph [187]) comprising: training a machine learning model, using training image data and natural language processing data (Paragraph [162], " In some embodiments, an image may include text in multiple regions of the image...The robot 100 may use machine learning based on a neural network trained to detect text."), to identify personal privacy boundaries within one or more environments (Figure 10a; element 1020; Paragraph [162], "After detecting text, the robot 100 may identify a bounding box around a perimeter of the region that includes the text"); deploying the machine learning model to a robotic device (Paragraph [162], " The robot 100 may use machine learning based on a neural network trained to detect text"), wherein the robotic device includes sensors to generate image data and audio data for a first environment of the robotic device (element 722, 724, 720A, 720B, 810); analyzing, using the machine learning model, the image data and audio data to identify a first area associated with a personal privacy boundary (Paragraph [161-162], "Since confidential information is commonly in the form of text, the robot 100 may identify text in images…Similarly, audio can include confidential information such as private conversations... may pause collecting audio when it detects human voice"); wherein identifying the first area associated with the personal privacy boundary includes determining, using natural language processing, that sensitive information is being discussed within the first area (Paragraph [111], "(such as national language processing). 
In such embodiments, the robots can request that the central system 210 (which may include greater processing capabilities and resources) to instead perform such functions."; Paragraph [6], "To avoid recording private conversations, the robot may detect human voice in the audio and remove the human voice from the audio"), and determining, using image recognition software (Paragraph [187], “In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Further, the functionalities described herein can be performed by a hardware processor or controller located within the robot.”), that the image data includes facial expression information associated with an individual (Paragraph [165], “In some embodiments, the robot 100 may detect a person in an image… Responsive to detecting a person, the robot 100 may identify a bounding box around the person's face”) located within the first area that is correlated with a personal privacy risk (Paragraph [179-180], “Similarly, the controller may generate a bounding box around objects or faces.”); classifying, using the machine learning model and based on the analyzing, the first area as being associated with the personal privacy boundary (Paragraph [179-180]); and programming the robotic device to respond to the first area classified as being associated with the personal privacy boundary, the programming causing the robotic device to dynamically perform one or more operations related to the personal privacy boundary (Paragraph [181])
While Schluntz teaches the claim limitations as stated above, it does not expressly disclose:
wherein the one or more operations includes the robotic device moving to a second area that is out of range of the sensitive information being discussed within the first area associated with the personal privacy boundary
However, Nishikawa teaches: wherein the one or more operations includes the robotic device moving to a second area that is out of range of the sensitive information being discussed within the first area associated with the personal privacy boundary (Figure 3; Paragraph [3])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robotic system of Schluntz, which identifies regions of confidential text, face, and speech from audio and image data, to include evacuating outside of the area where the private conversation is taking place, as taught by Nishikawa. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a robotic system identifying regions of confidential text, face, and speech from audio and image data and then evacuating the robot outside of the area where the private information is.
Regarding claim 11, Schluntz teaches: The system of claim 9, wherein programming the robotic device to respond to the first area associated with the personal privacy boundary further causes the robotic device to dynamically perform an additional operation selected from a group of instructions consisting of: avoiding collecting any data with respect to the personal privacy boundary associated with the first area (Paragraph [161, 181, 164], "In other embodiments, the robot 100 may stop recording audio completely when human voice is detected.")
While Schluntz teaches the claim limitations as stated above, it does not expressly disclose:
avoiding coming within a predefined distance of the personal privacy boundary associated with the first area
avoiding crossing the personal privacy boundary associated with the first area
and avoiding the personal privacy boundary associated with the first area based on a predetermined schedule
However, Nishikawa teaches: avoiding coming within a predefined distance of the personal privacy boundary associated with the first area (Paragraph [163], "Thereafter, if the telepresence robots 1a and 1b attempt to enter the first conversation-listening area, an evacuation order is transmitted, and entry into the first conversation-listening area is prohibited"); avoiding crossing the personal privacy boundary associated with the first area (Paragraph [165]); and avoiding the personal privacy boundary associated with the first area based on a predetermined schedule (Paragraph [166], "the robot position control section 35 may cause the time intervals between the determination that a telepresence robot is on the outside of the first conversation-listening area and the transmission of the stop order to be different time intervals")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robotic system of Schluntz and Nishikawa, which identifies regions of confidential text, face, and speech from audio and image data, blurs or blacks out the text or face or stops recording the audio, and evacuates outside of the area where the private information is, to include the evacuation process as taught by Nishikawa. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a robotic system identifying regions of confidential text, face, and speech from audio and image data, blurring or blacking out the text or face or stopping recording the audio, and evacuating the robot outside of and preventing entry into the area where the private information is, including at time intervals.
Regarding claim 12, Schluntz teaches: The system of claim 9, wherein the image data includes facial expression information associated with an individual located within the first area associated with the personal privacy boundary (Paragraph [165])
Regarding claim 16, Schluntz teaches: A computer program product comprising a computer- readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method for preventing robotic device privacy overreach (Paragraph [187]), the method comprising: training a machine learning model, using training image data and natural language processing data (Paragraph [162], " In some embodiments, an image may include text in multiple regions of the image...The robot 100 may use machine learning based on a neural network trained to detect text."), to identify personal privacy boundaries within one or more environments (Figure 10a; element 1020; Paragraph [162], "After detecting text, the robot 100 may identify a bounding box around a perimeter of the region that includes the text"); deploying the machine learning model to a robotic device (Paragraph [162], " The robot 100 may use machine learning based on a neural network trained to detect text"), wherein the robotic device includes sensors to generate image data and audio data for a first environment of the robotic device (element 722, 724, 720A, 720B, 810); analyzing, using the machine learning model, the image data and audio data to identify a first area associated with a personal privacy boundary (Paragraph [161-162], "Since confidential information is commonly in the form of text, the robot 100 may identify text in images…Similarly, audio can include confidential information such as private conversations... may pause collecting audio when it detects human voice"); wherein identifying the first area associated with the personal privacy boundary includes determining, using natural language processing, that sensitive information is being discussed within the first area (Paragraph [111], "(such as national language processing). 
In such embodiments, the robots can request that the central system 210 (which may include greater processing capabilities and resources) to instead perform such functions."; Paragraph [6], "To avoid recording private conversations, the robot may detect human voice in the audio and remove the human voice from the audio"), and determining, using image recognition software (Paragraph [187], “In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Further, the functionalities described herein can be performed by a hardware processor or controller located within the robot.”), that the image data includes facial expression information associated with an individual (Paragraph [165], “In some embodiments, the robot 100 may detect a person in an image… Responsive to detecting a person, the robot 100 may identify a bounding box around the person's face”) located within the first area that is correlated with a personal privacy risk (Paragraph [179-180], “Similarly, the controller may generate a bounding box around objects or faces.”); classifying, using the machine learning model and based on the analyzing, the first area as being associated with the personal privacy boundary (Paragraph [179-180]); and programming the robotic device to respond to the first area classified as being associated with the personal privacy boundary, the programming causing the robotic device to dynamically perform one or more operations related to the personal privacy boundary (Paragraph [181])
While Schluntz teaches the claim limitations as stated above, it does not expressly disclose:
wherein the one or more operations includes the robotic device moving to a second area that is out of range of the sensitive information being discussed within the first area associated with the personal privacy boundary
However, Nishikawa teaches: wherein the one or more operations includes the robotic device moving to a second area that is out of range of the sensitive information being discussed within the first area associated with the personal privacy boundary (Figure 3; Paragraph [3])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the computer program product of Schluntz, which identifies regions of confidential text, face, and speech from audio and image data, to include evacuating outside of the area where the private conversation is taking place, as taught by Nishikawa. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a computer program product identifying regions of confidential text, face, and speech from audio and image data and then evacuating the robot outside of the area where the private information is.
Regarding claim 18, Schluntz teaches: The computer program product of claim 16 wherein programming the robotic device to respond to the first area associated with the personal privacy boundary further causes the robotic device to dynamically perform an additional operation selected from a group of instructions consisting of: avoiding collecting any data with respect to the personal privacy boundary associated with the first area (Paragraph [161, 181, 164], "In other embodiments, the robot 100 may stop recording audio completely when human voice is detected.")
While Schluntz teaches the claim limitations as stated above, it does not expressly disclose:
avoiding coming within a predefined distance of the personal privacy boundary associated with the first area
avoiding crossing the personal privacy boundary associated with the first area
and avoiding the personal privacy boundary associated with the first area based on a predetermined schedule
However, Nishikawa teaches: avoiding coming within a predefined distance of the personal privacy boundary associated with the first area (Paragraph [163], "Thereafter, if the telepresence robots 1a and 1b attempt to enter the first conversation-listening area, an evacuation order is transmitted, and entry into the first conversation-listening area is prohibited"); avoiding crossing the personal privacy boundary associated with the first area (Paragraph [165]); and avoiding the personal privacy boundary associated with the first area based on a predetermined schedule (Paragraph [166], "the robot position control section 35 may cause the time intervals between the determination that a telepresence robot is on the outside of the first conversation-listening area and the transmission of the stop order to be different time intervals")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the computer program product of Schluntz and Nishikawa, which identifies regions of confidential text, face, and speech from audio and image data, blurs or blacks out the text or face or stops recording the audio, and evacuates outside of the area where the private information is, to include the evacuation process as taught by Nishikawa. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a computer program product identifying regions of confidential text, face, and speech from audio and image data, blurring or blacking out the text or face or stopping recording the audio, and evacuating the robot outside of and preventing entry into the area where the private information is, including at time intervals.
Regarding claim 19, Schluntz teaches: The computer program product of claim 16, wherein the image data includes facial expression information associated with an individual located within the first area associated with the personal privacy boundary (Paragraph [165])
Claims 5, 6, 8, 13, 14, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Schluntz et al. (US 2021/0342479 A1) in view of Nishikawa (US 2018/0338116 A1) and further in view of Schoessler (US 2022/0288781 A1).
Regarding claim 5, while Schluntz and Nishikawa teach the claim limitations as stated above, specifically the robot that avoids an area based on confidential/private information determined from image and audio data of claim 1, they do not expressly disclose:
identifying, using the machine learning model, a task to be performed by the robotic device at the first environment
modifying, using the machine learning model, the task based on the personal privacy boundary
and instructing, using the machine learning model, the robotic device to perform the modified task
However, Schoessler teaches: The computer-implemented method of claim 1, further comprising: identifying, using the machine learning model, a task to be performed by the robotic device at the first environment (Figure 9, 11; element 910, 1110); modifying, using the machine learning model, the task based on the personal privacy boundary (Figure 11, 10A-10B, 9; element 1180, 930; Paragraph [49], "If a potential collision is predicted, the robot system 100 may adjust the current trajectory"); and instructing, using the machine learning model, the robotic device to perform the modified task (Paragraph [50-51])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robot of Schluntz and Nishikawa, which identifies regions of confidential text, face, and speech from audio and image data, blurs or blacks out the text or face or stops recording the audio, and evacuates outside of the area where the private information is, to include using machine learning for tasks and adjusting the tasks based on the private regions as taught by Schoessler. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a robot using machine learning for tasks, identifying regions of confidential text, face, and speech from audio and image data, obscuring the text or face or stopping recording the audio, evacuating the robot outside of the area of private information, and adjusting the tasks based on those areas.
Regarding claim 6, Schluntz teaches: The computer-implemented method of claim 1, wherein the robotic device is selected from a group of robotic devices consisting of: a premises monitoring robotic device (Paragraph [4], "The robot can perform a number of functions and operations in a variety of categories, including but not limited to security operations"); a debris cleaning robotic device (Paragraph [4], "cleaning operations")
While Schluntz and Nishikawa teach the claim limitations as stated above, they do not expressly disclose:
and an unmanned aerial vehicle
However, Schoessler teaches: and an unmanned aerial vehicle (Paragraph [2])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the security or cleaning robot of Schluntz and Nishikawa, which identifies regions of confidential text, face, and speech from audio and image data, obscures the text or face or stops recording the audio, and evacuates the robot outside of the area of private information, to include the option of the robot also being a UAV as taught by Schoessler. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a security, cleaning, or UAV robot identifying regions of confidential text, face, and speech from audio and image data, obscuring the text or face or stopping recording the audio, and evacuating the robot outside of the area of private information.
Regarding claim 8, while Schluntz and Nishikawa teach the claim limitations as stated above, specifically the robot that avoids an area based on confidential/private information determined from image and audio data of claim 1, they do not expressly disclose:
While Schluntz and Nishikawa teach the claim limitations as stated above, it does not expressly disclose:
the machine learning model uses an ensemble decision tree for classification of images pertaining to the personal privacy boundary
However, Schoessler teaches: machine learning model uses an ensemble decision tree for classification of images pertaining to the personal privacy boundary (Paragraph [72])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify a robot identifying regions of confidential text, face, and speech from audio and image data, obscuring the text or face, or stopping recording the audio, and evacuating the robot outside of the area of private information, and adjusting the tasks based on the areas, of Schluntz and Nishikawa, to include the decision tree learning model as taught by Schoessler. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a robot using machine learning for tasks and decision tree models for identifying regions of confidential text, face, and speech from audio and image data, obscuring the text or face, or stopping recording the audio, evacuating the robot outside of the area of private information, and adjusting the tasks based on the areas.
Regarding claim 13, while Schluntz and Nishikawa teach the claim limitations as stated above, specifically the robotic system that avoids an area based on confidential/private information determined from image and audio data of claim 9, the combination does not expressly disclose:
identifying, using the machine learning model, a task to be performed by the robotic device at the first environment
modifying, using the machine learning model, the task based on the personal privacy boundary
and instructing, using the machine learning model, the robotic device to perform the modified task
However, Schoessler teaches: identifying, using the machine learning model, a task to be performed by the robotic device at the first environment (Figure 9, 11; element 910, 1110); modifying, using the machine learning model, the task based on the personal privacy boundary (Figure 11, 10A-10B, 9; element 1180, 930; Paragraph [49], "If a potential collision is predicted, the robot system 100 may adjust the current trajectory"); and instructing, using the machine learning model, the robotic device to perform the modified task (Paragraph [50-51])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify a robotic system identifying regions of confidential text, face, and speech from audio and image data, blurring or blacking out the text or face, or stopping recording the audio, and evacuating outside of the area where the private information is located, of Schluntz and Nishikawa, to include using machine learning for tasks and adjusting the tasks based on the private regions as taught by Schoessler. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a robotic system using machine learning for tasks, identifying regions of confidential text, face, and speech from audio and image data, blurring or blacking out the text or face, or stopping recording the audio, evacuating the robot outside of the area where the private information is located, and adjusting the tasks based on the areas.
Regarding claim 14, Schluntz teaches: The system of claim 9, wherein the robotic device is selected from a group of robotic devices consisting of: a premises monitoring robotic device (Paragraph [4], "The robot can perform a number of functions and operations in a variety of categories, including but not limited to security operations"); a debris cleaning robotic device (Paragraph [4], "cleaning operations").
While Schluntz and Nishikawa teach the claim limitations as stated above, it does not expressly disclose:
and an unmanned aerial vehicle
However, Schoessler teaches: and an unmanned aerial vehicle (Paragraph [2])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify a security or cleaning robotic system using machine learning for tasks and identifying regions of confidential text, face, and speech from audio and image data, obscuring the text or face, or stopping recording the audio, evacuating the robot outside of the area of private information, and adjusting the tasks based on the areas, of Schluntz and Nishikawa, to include the robot option of also being a UAV as taught by Schoessler. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a security, cleaning, or UAV robotic system identifying regions of confidential text, face, and speech from audio and image data, obscuring the text or face, or stopping recording the audio, and evacuating the robot outside of the area of private information.
Regarding claim 15, while Schluntz and Nishikawa teach the claim limitations as stated above, specifically the robot that avoids an area based on confidential/private information determined from image and audio data of claim 9, the combination does not expressly disclose:
the machine learning model uses an ensemble decision tree for classification of images pertaining to the personal privacy boundary
However, Schoessler teaches: machine learning model uses an ensemble decision tree for classification of images pertaining to the personal privacy boundary (Paragraph [72])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify a robotic system identifying regions of confidential text, face, and speech from audio and image data, obscuring the text or face, or stopping recording the audio, and evacuating the robot outside of the area of private information, of Schluntz and Nishikawa, to include the decision tree learning model as taught by Schoessler. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a robotic system using decision tree models for identifying regions of confidential text, face, and speech from audio and image data, obscuring the text or face, or stopping recording the audio, and evacuating the robot outside of the area of private information.
Regarding claim 20, Schluntz teaches: The computer program product of claim 15, wherein the robotic device is selected from a group of robotic devices consisting of: a premises monitoring robotic device (Paragraph [4], "The robot can perform a number of functions and operations in a variety of categories, including but not limited to security operations"); a debris cleaning robotic device (Paragraph [4], "cleaning operations").
While Schluntz and Nishikawa teach the claim limitations as stated above, it does not expressly disclose:
and an unmanned aerial vehicle
However, Schoessler teaches: and an unmanned aerial vehicle (Paragraph [2])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify a security or cleaning robotic computer program product using decision tree models for identifying regions of confidential text, face, and speech from audio and image data, obscuring the text or face, or stopping recording the audio, and evacuating the robot outside of the area of private information, of Schluntz and Nishikawa, to include the robot option of also being a UAV as taught by Schoessler. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results. The predictable results include: a security, cleaning, or UAV robotic computer program product using decision tree models for identifying regions of confidential text, face, and speech from audio and image data, obscuring the text or face, or stopping recording the audio, and evacuating the robot outside of the area of private information.
Conclusion
Other art of interest is Sakai (US 20060184274 A1). It is directed to an autonomously moving robot that recognizes images and humans. Specifically, it teaches security and surveillance functions in Paragraph [21] that are applicable to the "premises monitoring robotic device" of claims 6, 14, and 20.
Other art of interest is Bronicki et al. (US 20210374836 A1). It is directed to monitoring retail stores. Specifically, Paragraph [0445] teaches: "For example, this may include an entrance to a retail store (e.g., detecting customer traffic), a stock room (e.g., for tracking inventory and/or product movement), a break room (e.g., detecting employee morale through facial expressions), or various other image data."
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALYSE TRAMANH TRAN whose telephone number is (703)756-5879. The examiner can normally be reached M-F 8:30am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran can be reached on 571-272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.T.T./ Examiner, Art Unit 3656 /KHOI H TRAN/Supervisory Patent Examiner, Art Unit 3656