DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Remark(s)
Applicant's amendment filed November 20, 2025 has been fully entered and considered. Applicant's amendments to the claims have overcome each and every prior art rejection previously set forth in the Non-Final Office Action mailed August 27, 2025. Regarding the arguments against the 101 rejections, the examiner respectfully finds them non-persuasive. Regarding the prior art rejections, all new grounds of rejection set forth in the present action were necessitated by Applicant's claim amendments. Accordingly, this action is made final.
Status of Claims
Claims 1-20 are pending; claims 1-20 have been amended. Claims 1-20 remain rejected.
Response to Argument(s)
In view of the amendments to independent claims 1, 9, and 17, the previously applied prior art rejections are withdrawn. Applicants' arguments are otherwise not persuasive, as the examiner finds that the same prior art has teachings that still cover the amended features.
101 rejections:
In pages 8-9 of the remarks, Applicants argue that the features of the claims such as, “in response to the detecting, providing via a network, one or more frames of the video stream including the unknown feature to a server comprising a second machine learning configured to detect second features in addition to the first feature,” “receiving, via the network, a classification label corresponding to the unknown feature from the server,” and “in response to receiving the classification label, transmitting a camera detection notification indicating detection of the unknown feature to a user device corresponding to a user account linked to the camera system” are not practically performable in the human mind and therefore are not mental processes.
Applicants further argue that the claims, as a whole, amount to significantly more because they improve the functioning of a computer system in which a camera system loaded with a machine learning model of limited functionality offloads image recognition tasks for images with unknown features to a more robust machine learning model (“detect second features in addition to the first feature”) that executes remotely from the camera system, with support drawn from paragraph [0058].
Examiner’s reply:
The examiner respectfully disagrees with Applicants' arguments. As to the first part of the arguments, the features of the claims such as, “in response to the detecting, providing via a network, one or more frames of the video stream including the unknown feature to a server comprising a second machine learning configured to detect second features in addition to the first feature,” “receiving, via the network, a classification label corresponding to the unknown feature from the server,” and “in response to receiving the classification label, transmitting a camera detection notification indicating detection of the unknown feature to a user device corresponding to a user account linked to the camera system,” even though they are not considered mental processes, are considered, under Step 2A Prong 2, to be the insignificant extra-solution activities of data gathering and data providing/transmitting, together with the additional element of a machine learning model recited at a high level of generality. These are not indicative of an integration of the judicial exception into a practical application, nor do they amount to significantly more. See the 101 rejections below for more details.
Regarding the argument that the claims, as a whole, amount to significantly more: the examiner respectfully disagrees. Applicants argue that the features of the claims improve the functioning of a computer system by loading a robust machine learning model that performs image recognition tasks. However, the examiner finds that these features do not improve or alter the functionality of a computer in an improved manner; they are merely programming instructions of a computer program executed by a general-purpose computer whose processor, memory, and program perform the well-known function of executing program instructions. Even with a camera system, it is still a general-purpose computer whose components perform normal, well-known functions without any alteration to, or limitation of, the specific functioning of the computer in an improved way.
Moreover, the machine learning models are recited at a high level of generality, without limiting in which structure or in which manner the machine learning model functions to arrive at such an outcome. Therefore, the claims recite limitations that amount to a mere attempt to execute the judicial exceptions using a generic, well-known machine learning model, which does not meet the requirements of 35 U.S.C. 101.
Importantly, Applicants are reminded that the claims are construed under the broadest reasonable interpretation (BRI) in light of the specification. Therefore, teachings of the specification that are not reflected in the claims (such as the support brought in from paragraph [0058] on page 9 of Applicants' remarks) cannot be imported into the scope of the claims. The asserted improvement is not reflected in the claims.
102 rejections:
In pages 9-11 of the remarks, Applicants argue that the proposed Mehmood reference does not teach or suggest the following features of the claims:
“in response to the detecting, providing, via a network, one or more frames of the video stream including the unknown feature to a server comprising a second machine learning configured to detect second features in addition to the first feature”
These are newly added features to the independent claims that narrow the scope of the claims and overcome the previous prior art rejection. New grounds of rejection are set forth below, as the examiner finds the same prior art has teachings that cover the scope of these features.
Regarding the argument that the proposed Mehmood does not describe that the IoT device detects an unknown feature in a video stream and provides, via a network, frames of the video stream including the unknown feature to a server comprising a second machine learning configured to detect second features in addition to the first feature:
Examiner’s reply:
The examiner respectfully disagrees with Applicants' arguments. Applicants are reminded that the claims are construed under BRI in light of the specification. Therefore, the term “unknown features” can be interpreted to cover any features that are being processed or detected from the image, since they are to be determined/made known through further processed information/data. Moreover, the first and second machine learnings recited in the claims are not explicitly distinct from each other in the sense that they must have different structures or manners of functioning; rather, they are recited at a high level of generality as the same processing method. Under BRI, they can therefore be interpreted as the same network or machine learning model at different stages; moreover, they are not recited as neural networks or models, but merely as “machine learnings.” Nevertheless, the examiner still finds that the prior art Mehmood teaches two different machine learnings, as shown in figure 4: a pre-trained model using the COCO dataset (analogous to the recited first machine learning model) processes feature maps and default bounding boxes (analogous to the recited unknown features), and a Single Shot Detector (analogous to the second machine learning) processes the multibox bounding box regression (analogous to the recited second features), which is in addition to the feature maps and the default bounding boxes. Since this information/data is fed into the Single Shot Detector, all of this information/data is used together for the object localization result (analogous to the recited “second features in addition to the first feature”).
The Office respectfully encourages Applicant to amend the claims in keeping with the claimed invention's disclosure to overcome the prior art of record.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding independent claim 1 and its dependent claims 2-8:
Step 1 Analysis: Claim 1 is directed to a method/process, which falls within one of the four statutory categories.
Step 2A Prong 1 Analysis: Claim 1 recites, in part, the limitations of “detecting an unknown feature in a video stream” and “detecting the feature in the video stream.” As drafted, these are processes that, under the broadest reasonable interpretation, cover performance of the limitations in the mind, which falls within the “Mental Processes” grouping of abstract ideas: the human mind, under BRI (broadest reasonable interpretation), can observe a video stream and detect a feature.
Accordingly, the claim recites an abstract idea.
Step 2A Prong 2 Analysis: This judicial exception is not integrated into a practical application. In particular, the claim recites the following additional element(s):
A computer-implemented method for loading a first machine learning model into a camera system to detect a first feature, comprising:
receiving, by at least one computer processor on a camera system, a command to download the first machine learning model to the camera system, wherein the first machine learning model is configured to detect a first feature in a video stream;
downloading the machine learning model to the camera system; installing the machine learning model on the camera system; capturing a video stream;
using the first machine learning model;
and in response to receiving the classification label, transmitting a camera detection notification indicating detection of the feature to a user device corresponding to a user account linked to the camera system;
“in response to the detecting, providing via a network, one or more frames of the video stream including the unknown feature to a server comprising a second machine learning configured to detect second features in addition to the first feature,”
“receiving, via the network, a classification label corresponding to the unknown feature from the server,” and
“in response to receiving the classification label, transmitting a camera detection notification indicating detection of the unknown feature to a user device corresponding to a user account linked to the camera system”
The additional elements include a generic computer and computer components performing generic functions, such as a processor executing a program. Furthermore, the additional elements include insignificant extra-solution activities of data gathering, such as receiving data/information (a command is also a form of data) to download further data, here a machine learning model (which is merely recited as being downloaded, and hence still a mere recitation of data/information being gathered); the downloading, installing, capturing, transmitting, receiving, and providing steps are likewise mere recitations of data gathering and data transmitting/providing. A mere attempt to implement the abstract idea using a generic machine learning model recited at a high level of generality is not an indication of integration of the judicial exception into a practical application. The claim as a whole is directed to an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Please see MPEP §2106.04(d).III.C.
Step 2B Analysis: There are no additional elements that amount to significantly more than the judicial exception. Please see MPEP §2106.05. The claim is directed to an abstract idea.
For all of the foregoing reasons, claim 1 does not comply with the requirements of 35 U.S.C. 101.
Accordingly, dependent claims 2-8 do not provide elements that overcome the deficiencies of independent claim 1. Moreover, claim 2 recites, in part, “setting…the video stream; and transmitting,…automation action,” which are further additional elements reciting the insignificant extra-solution activities of gathering and setting data; even though the claim specifies what the data/information is (a response command to perform a certain execution, or a home automation system response command), this is still a mere recitation of data/information being gathered. Claim 3 recites, in part, “retraining the machine learning model….to detect the feature,” which includes the mental-process step of modifying one or more parameters used by the machine learning model to detect the feature; under BRI, these are steps that a human mind can perform through observation and evaluation using pen and paper, and implementing them with a generic machine learning model is merely an attempt to implement the abstract idea using a generic model. The retraining step is an additional element of intended use, and retraining a model is a well-known, routine concept in the art, not an improvement. Claim 4 recites, in part, steps that, under BRI, a human mind can likewise perform mentally through observation and evaluation: the human mind can observe a video stream, detect a feature, and present frames as recited in the claim. Claim 4 further recites additional elements of the insignificant extra-solution activities of data gathering (receiving and transmitting data/information) and the generic, well-known, routine intended use in the art of retraining a model; the machine learning model here is again recited at a high level of generality, as a mere attempt to implement the abstract idea using a generic model.
Claim 5 recites, in part, further additional elements of the insignificant extra-solution activity of data gathering in its receiving and transmitting steps. Claims 6-8 recite wherein clauses that further specify, in general terms, the abstract ideas and the data/information on which each operates; hence they remain abstract ideas with insignificant additional elements.
Accordingly, claims 1-8 are not patent eligible under 35 U.S.C. 101.
Regarding claim 9 and its dependent claims 10-16:
Independent claim 9 recites limitations analogous to those of independent claim 1 and hence is analyzed under the same approach and found ineligible under 101. Claim 9 further recites additional elements of well-known, generic components of a camera system performing the functions of a computer, i.e., computer components of a processor executing instructions stored in a memory. Dependent claims 10-16 recite limitations analogous to those of dependent claims 2-8 and hence are analyzed under the same approach and found ineligible under 101.
Regarding claim 17 and its dependent claims 18-20:
Independent claim 17 recites limitations analogous to those of independent claim 1 and hence is analyzed under the same approach and found ineligible under 101. Claim 17 further recites additional elements of well-known, generic components of a camera system performing the functions of a computer, i.e., computer components of a processor executing instructions stored in a non-transitory computer-readable medium. Dependent claims 18-20 recite limitations analogous to those of dependent claims 2-8 and hence are analyzed under the same approach and found ineligible under 101.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Faisal Mehmood et al., “Object Detection Mechanism Based on Deep Learning Algorithm using Embedded IoT Devices for Smart Home Appliances Control in CoT,” Journal of Ambient Intelligence and Humanized Computing, March 2019 (hereinafter “Mehmood”).
Regarding claim 1, Mehmood discloses a computer-implemented method for loading a first machine learning model into a camera system to detect a first feature (the system such as shown in FIG. 1, of a camera system using the smart home management service and smart control service), comprising: receiving, by at least one computer processor on the camera system, a command to download the first machine learning model to the camera system, wherein the first machine learning model is configured to detect a first feature in a video stream (section 3, 1st par., discloses that the service can be used on a phone through a phone app that includes live streaming video of the environment for control; the user can be understood to download the app on the phone [a command to download], wherein the app includes a machine learning model such as shown in FIG. 4 to detect a feature in a video stream, which, by BRI, covers the scope of the claim); downloading the first machine learning model to the camera system (downloading the app indicates downloading the machine learning model to the camera system, which includes the phone app); installing the first machine learning model on the camera system (the downloading would install the machine learning model onto the camera system, which, by BRI, covers the scope of the claim); capturing a video stream (the live streaming of the environment, as discussed previously, indicates capturing a video stream); detecting, using the first machine learning model, an unknown feature in the video stream (the machine learning model of FIG. 4 detects the feature in the video stream; the term “unknown features” can be interpreted to cover any features that are being processed or detected from the image, since they are to be determined/made known through further processed information/data, such as shown in figure 4, wherein the feature maps [first features] and the default bounding boxes are analogous to the unknown feature, since they are to be determined, made known, and processed to be specific rather than default); in response to the detecting, providing, via a network, one or more frames of the video stream including the unknown feature to a server (figure 4 discloses a Single Shot Detector, which is analogous to the second machine learning as claimed, since the SSD is a deep neural network according to section 2, last par.; moreover, the input to the SSD includes the information previously processed, including the already-mapped “unknown features” and the “one or more frames” of the preprocessing step, and since the Single Shot Detector processing is sequential to the previous processing, it is analogous to “in response to the detecting,…”) comprising a second machine learning configured to detect second features in addition to the first feature (the Single Shot Detector, as discussed previously and shown in figure 4 [analogous to the second machine learning], processes the multibox bounding box regression [analogous to the recited second features], which is in addition to the feature maps and the default bounding boxes; since this information/data is fed into the Single Shot Detector, all of this information/data is used together for the object localization result [analogous to the recited “second features in addition to the first feature”]); receiving, via the network, a classification label corresponding to the unknown feature from the server (as shown in figure 4, the output of the processing is an object classification, which is analogous to the classification label as claimed, of the processed data/information previously mapped to the unknown feature); and in response to receiving the classification label, transmitting a camera detection notification indicating detection of the feature to a user device corresponding to a user account linked to the camera system (section 3, 1st par., discloses that the user can communicate with the system through the phone; hence, it indicates that the user can observe the live stream with the detected objects in the stream and send control commands through the user account linked to the camera system, such as shown in FIG. 5, which is in response to the steps and the output of figure 4 and hence is analogous to “in response to receiving the classification label” as claimed).
Regarding claim 2, Mehmood discloses the computer-implemented method of claim 1, further comprising: setting a response command to perform a home automation action in response to detection of the unknown feature in the video stream (through the detection of FIG. 5, the user can control the home appliances such as shown in FIG. 6; this is therefore analogous to setting a response command to perform a home automation action [controlling the appliances] in response to the result of the detection of FIG. 5, and, by BRI, covers the scope of the claim); and transmitting, to a home automation system, the response command to perform the home automation action (the command is then sent to the home automation system to perform the home automation action to control the appliance, such as shown in FIG. 6 and FIG. 8).
Regarding claim 3, Mehmood discloses the computer-implemented method of claim 1, wherein the installing further comprises: retraining the machine learning model using one or more images captured by the camera system, thereby modifying one or more parameters used by the first machine learning model to detect the feature (page 7, 1st par., discloses that a pre-trained model is used for classifying objects, and further discloses training the pre-trained model by modifying the model to teach the network to detect objects of various sizes based on IoU ratios [modifying the parameters as claimed, by BRI], which indicates retraining and, by BRI, covers the scope of the claim).
Regarding claim 4, Mehmood discloses the computer-implemented method of claim 1, further comprising: retraining the first machine learning model using the classification label to classify the unknown feature (page 7, 1st par., discloses that a pre-trained model is used for classifying objects, and further discloses training the pre-trained model by modifying the model to teach the network to detect objects of various sizes based on IoU ratios [modifying the parameters as claimed, by BRI], which indicates retraining and, by BRI, covers the scope of the claim).
Regarding claim 5, Mehmood discloses the computer-implemented method of claim 1, further comprising: receiving data from a second camera system indicating detection of a third feature by a third machine learning model installed on the second camera system (FIG. 6 shows that the home environment can include several areas, such as a bedroom or other rooms, whose room name is entered to access them; this indicates the use of more than one camera for the detection of the corresponding object [the third feature as claimed, by BRI], wherein the machine learning model used for detection in the corresponding room can be understood to be the third machine learning model and the corresponding cameras to be the second camera system, which, by BRI, covers the scope of the claim); and transmitting a second response command to perform a second home automation action corresponding to detection of the first feature in the video stream and the third feature (the processing for the second camera is the same, which includes transmitting a second response command to perform a second home automation action corresponding to the detection result, as for the analogous limitation in claim 1 above).
Regarding claim 6, Mehmood discloses the computer-implemented method of claim 1, wherein the first feature is an appearance of a predefined object in the video stream (as discussed above in claim 1, the feature is an appearance of a predefined object in the live-streamed room, such as a light in a bedroom as shown in FIG. 6).
Regarding claim 7, Mehmood discloses the computer-implemented method of claim 1, wherein the first feature is an absence of a previously detected object in the video stream (as shown in FIG. 4 and disclosed on page 7, the system can detect new objects that were not previously classified; in this instance, the feature would be a new object that is not the same as the objects previously identified, in other words, an absence of the previously detected object in the video stream being the feature as discussed, which, by BRI, covers the scope of the claim).
Regarding claim 8, Mehmood discloses the computer-implemented method of claim 1, wherein the first feature is recognition of a semantic meaning corresponding to textual characters identified via natural language processing (section 4, 1st par., and FIG. 6 show that the processing can include the processing of an SMS message to control the device, which includes textual characters to be processed into a command; the machine learning that does so can be understood to be the natural language processing, which, by BRI, covers the scope of the claim).
Regarding claim 9, Mehmood discloses a camera system, comprising: one or more cameras; one or more memories; and at least one processor each coupled to the one or more cameras and at least one of the memories and configured to perform operations (the system such as shown in FIG. 1, of a camera system using the smart home management service and smart control service, using a computer with computer components), comprising: receiving a command to download a machine learning model to the camera system, wherein the machine learning model is configured to detect a feature in a video stream (section 3, 1st par., discloses that the service can be used on a phone through a phone app that includes live streaming video of the environment for control; the user can be understood to download the app on the phone [a command to download], wherein the app includes a machine learning model such as shown in FIG. 4 to detect a feature in a video stream, which, by BRI, covers the scope of the claim); downloading the machine learning model to the camera system (downloading the app indicates downloading the machine learning model to the camera system, which includes the phone app); installing the machine learning model on the camera system (the downloading would install the machine learning model onto the camera system, which, by BRI, covers the scope of the claim); capturing a video stream (the live streaming of the environment, as discussed previously, indicates capturing a video stream); detecting, using the machine learning model, the feature in the video stream (the machine learning model of FIG. 4 detects the feature in the video stream); and in response to the detecting, transmitting a camera detection notification indicating detection of the feature to a user device corresponding to a user account linked to the camera system (section 3, 1st par., discloses that the user can communicate with the system through the phone; hence, it indicates that the user can observe the live stream with the detected objects in the stream and send control commands through the user account linked to the camera system, such as shown in FIG. 5).
Regarding claim 10, Mehmood discloses the camera system of claim 9, the operations further comprising: setting a response command to perform a home automation action in response to detection of the feature in the video stream (through the detection of FIG. 5, the user can control the home appliances such as shown in FIG. 6; this is therefore analogous to setting a response command to perform a home automation action [controlling the appliances] in response to the result of the detection of FIG. 5, and, by BRI, covers the scope of the claim); and transmitting, to a home automation system, the response command to perform the home automation action (the command is then sent to the home automation system to perform the home automation action to control the appliance, such as shown in FIG. 6 and FIG. 8).
Regarding claim 11, Mehmood discloses the camera system of claim 9, wherein the installing further comprises: retraining the machine learning model using one or more images captured by the camera system, thereby modifying one or more parameters used by the machine learning model to detect the feature (page 7, 1st par., discloses that a pre-trained model is used for classifying objects, and further discloses training the pre-trained model by modifying the model to teach the network to detect objects of various sizes based on IoU ratios [modifying the parameters as claimed, by BRI], which indicates retraining and, by BRI, covers the scope of the claim).
Regarding claim 12, Mehmood discloses the camera system of claim 9, the operations further comprising: presenting one or more frames of the video stream including the unknown feature to a system external to the camera system for classification of the unknown feature (page 7, 1st par., discloses that the model can classify unknown objects [the unknown feature as claimed, by BRI] of the live streaming video of FIG. 4 and FIG. 5, provided to a Single Shot Detector [a system external to the camera system for classification of the unknown feature as claimed, by BRI]); receiving a classification label corresponding to the unknown feature from the system external to the camera system (the system, as discussed previously, would output the classification result for the object to the camera system as discussed); transmitting, to the user device, a second camera detection notification including the classification label (the camera system, as discussed previously, includes the user's phone system to receive the notification including the class for the classified object, such as shown in FIG. 4 and FIG. 5); and retraining the machine learning model using the classification label to classify the unknown feature (page 7, 1st par., discloses that a pre-trained model is used for classifying objects, and further discloses training the pre-trained model by modifying the model to teach the network to detect objects of various sizes based on IoU ratios [modifying the parameters as claimed, by BRI], which indicates retraining and, by BRI, covers the scope of the claim).
Regarding claim 13, Mehmood discloses the camera system of claim 9, the operations further comprising: receiving data from a second camera system indicating detection of a third feature by a third machine learning model installed on the second camera system (FIG. 6 shows that the home environment can include several areas, such as a bedroom or other rooms, whose room name is entered to access them; this indicates the use of more than one camera for the detection of the corresponding object [the third feature as claimed, by BRI], wherein the machine learning model used for detection in the corresponding room can be understood to be the third machine learning model and the corresponding cameras to be the second camera system, which, by BRI, covers the scope of the claim); and transmitting a second response command to perform a second home automation action corresponding to detection of the first feature in the video stream and the third feature (the processing for the second camera is the same, which includes transmitting a second response command to perform a second home automation action corresponding to the detection result, as for the analogous limitation in claim 9 above).
Regarding claim 14, Mehmood discloses the camera system of claim 9, wherein the first feature is an appearance of a predefined object in the video stream (as discussed above in claim 9, the feature is an appearance of a predefined object in the live-streamed room, such as a light in a bedroom as shown in FIG. 6).
Regarding claim 15, Mehmood discloses the camera system of claim 9, wherein the first feature is an absence of a previously detected object in the video stream (as shown in FIG. 4 and disclosed on page 7, the system can detect new objects that were not previously classified; in this instance, the feature would be a new object that is not the same as the objects previously identified, in other words, an absence of the previously detected object in the video stream, which, by BRI, covers the scope of the claim).
Regarding claim 16, Mehmood discloses the camera system of claim 9, wherein the first feature is recognition of a semantic meaning corresponding to textual characters identified via natural language processing (section 4, 1st par., and FIG. 6 show that the processing can include processing an SMS message to control the device, which involves textual characters being processed into a command; the machine learning used to do so can be understood to be natural language processing, which, by BRI, covers the scope of the claim).
Regarding claim 17, Mehmood discloses a non-transitory computer-readable medium having instructions stored thereon that, when executed by at least one computing device, cause the at least one computing device to perform operations (the system shown in FIG. 1 is a camera system using a smart home management service and a smart control service, i.e., a computer with computer components), comprising: receiving, by a camera system, a command to download the first machine learning model to the camera system, wherein the first machine learning model is configured to detect a first feature in a video stream (section 3, 1st par., discloses that the service can be used on a phone through a phone app that includes live streaming video of the environment for control; the user can be understood to download the app onto the phone [a command to download], wherein the app includes a machine learning model, such as shown in FIG. 4, to detect a feature in a video stream, which, by BRI, covers the scope of the claim); downloading the first machine learning model to the camera system (downloading the app indicates downloading the machine learning model to the camera system, which includes the phone app); installing the first machine learning model on the camera system (the downloading would install the machine learning model onto the camera system, which, by BRI, covers the scope of the claim); capturing a video stream (the live streaming of the environment, as discussed previously, indicates capturing a video stream); detecting, using the first machine learning model, an unknown feature in the video stream (the machine learning model of FIG. 4 detects the feature in the video stream; the term "unknown features" can be interpreted to be any features that are being processed or detected from the image, since they are to be determined/known from further processed information/data; as shown in FIG. 4, the feature maps [first features] and the default bounding boxes are analogous to the unknown feature, since they are to be determined and processed to be specific rather than default); in response to the detecting, providing, via a network, one or more frames of the video stream including the unknown feature to a server (FIG. 4 discloses a Single Shot Detector, which is analogous to the second machine learning as claimed, since the SSD is a deep neural network according to section 2, last par.; moreover, the input to the SSD includes the information previously processed, including the already-mapped "unknown features" and "one or more frames" of the preprocessing step; since the Single Shot Detector processing is sequential to the previous processing, it is analogous to "in response to the detecting, ...") comprising a second machine learning configured to detect second features in addition to the first feature (the Single Shot Detector [analogous to the second machine learning], as discussed previously and shown in FIG. 4, processes the multibox bounding box regression [analogous to the recited second features], which is in addition to the feature maps and the default bounding boxes; since this information/data is fed into the Single Shot Detector, all of it is used together for the object localization result [analogous to the recited "second features in addition to the first feature"]); receiving, via the network, a classification label corresponding to the unknown feature from the server (as shown in FIG. 4, the output of the processing is an object classification, which is analogous to the classification label as claimed, of the processed data/information previously mapped to be the unknown feature); and in response to receiving the classification label, transmitting a camera detection notification indicating detection of the feature to a user device corresponding to a user account linked to the camera system (section 3, 1st par., discloses that the user can communicate with the system through the phone; hence, the user can observe the live stream with the detected objects and send a control command through the user account linked to the camera system, such as shown in FIG. 5; this occurs in response to the step and the output of FIG. 4 and is therefore analogous to "in response to receiving the classification label" as claimed).
Regarding claim 18, Mehmood discloses the non-transitory computer-readable medium of claim 17, the operations further comprising: setting a response command to perform a home automation action in response to detection of the unknown feature in the video stream (through the detection of FIG. 5, the user can control the home appliances such as shown in FIG. 6; this is analogous to setting a response command to perform a home automation action [controlling the appliances] in response to the result of the detection of FIG. 5, which, by BRI, covers the scope of the claim); and transmitting, to a home automation system, the response command to perform the home automation action (the command is then sent to the home automation system to perform the home automation action to control the appliance, such as shown in FIG. 6 and FIG. 8).
Regarding claim 19, Mehmood discloses the camera system of claim 9, wherein the installing further comprises: retraining the first machine learning model using one or more images captured by the camera system, thereby modifying one or more parameters used by the machine learning model to detect the feature (page 7, 1st par., discloses that a pre-trained model is used for classifying objects; further training of the pre-trained model is disclosed by modifying the model to teach the network to detect objects of various sizes based on IoU ratios [modifying the parameters as claimed, by BRI], which indicates retraining and, by BRI, covers the scope of the claim).
Regarding claim 20, Mehmood discloses the non-transitory computer-readable medium of claim 17, the operations further comprising: retraining the first machine learning model using the classification label to classify the unknown feature (page 7, 1st par., discloses that a pre-trained model is used for classifying objects; further training of the pre-trained model is disclosed by modifying the model to teach the network to detect objects of various sizes based on IoU ratios [modifying the parameters as claimed, by BRI], which indicates retraining and, by BRI, covers the scope of the claim).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHUONG HAU CAI whose telephone number is (571)272-9424. The examiner can normally be reached M-F 8:30 am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHUONG HAU CAI/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673