DETAILED ACTIONS
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim that this application is a child application of Provisional Application 63/379,087, filed on October 11, 2022.
Information Disclosure Statement
The information disclosure statements (“IDS”) filed on 03/16/2024, 01/30/2024, 11/22/2024, 11/22/2023, and 09/11/2023 have been reviewed.
Drawings
The drawings (5 pages) have been considered and placed on record in the file.
Status of Claims
Claims 1-8, 11-15, and new claims 16-22 are pending. Claims 9-10 are canceled.
Response to Amendment
The amendment filed 12/16/2025 has been entered in full. Claims 1-8, 11-15, and new claims 16-22 remain pending in the application. Applicant’s amendment to the claims has overcome each and every objection and the 112(b) and 101 rejections previously set forth in the Non-Final Office Action mailed September 19, 2025.
Response to Arguments
Applicant’s arguments with respect to claims 1 and 11 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 11-20 and 22 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. In Claim 11, the limitation requires “delineating the object with a first delineating feature on a visual display; tracking movement of the object within the field of view; detecting entry of the object into the zone; and upon detecting entry of the object into the zone, delineating the object with a second delineating feature different from the first and initiating an alarm”. The Specification only discloses delineating the pillow with a warning status because it is in proximity to the head of the subject, and delineating a different pillow that is closer to the feet of the subject. The Specification does not suggest delineating either pillow with a second delineating feature different from the first upon its entry into the zone in proximity to the head. The Specification only discloses delineating two different objects differently, rather than delineating the same object in two different ways upon entering the zone in proximity to the head. Claims 12-20 and 22 are rejected for being dependent upon claim 11.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5-8, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Doken (US 2023/0237797 A1, filed on 01/27/2022), hereinafter referred to as Doken, in view of Li et al. (CN104091408A, published 10/08/2014, Espacenet translation), hereinafter referred to as Li.
CLAIM 1
Doken discloses a method of monitoring a subject (Doken, [0004], “systems and methods are provided herein for a smart home management system (SHMS) configured to identify, using a sensor, a location of an object in an environment, and determine a classification of the object”, [0035], “The SHMS may monitor and store any suitable type of user information associated with user 102, and may reference the particular user profile or account to determine an identity of a human (e.g., user 102) in environment 100”, [0049], “Each of user 102 and 204 may have a user profile with the SHMS specifying respective characteristics of the users“, “user 204 may not be wearing or carrying a particular device, and instead may be monitored via a baby camera or other camera positioned at one or more locations in environment 200”), the method comprising:
acquiring, from a non-contact monitoring system, a video signal having a field of view (Doken, [0040], “the augmented reality scene may comprise augmenting live video or other live-captured moving image, three-dimensional image, panoramic or other immersive image (e.g., a 360-degree image or series of live images surrounding a particular viewpoint), live VR or AR renderings, an image received from a content source, and/or any other visual media or any combination thereof.”);
extracting depth measurements from the video signal (Doken, [0032], “In some embodiments, a Cartesian coordinate plane is used to identify a position of an object in environment 100, with the position recorded as (X, Y) coordinates on the plane. In some embodiments, the coordinates may include a coordinate in the Z-axis, to identify the position of each identified object in 3D space, based on images captured using 3D sensors and any other suitable depth-sensing technology.”);
detecting a subject with the non-contact monitoring system (Doken, [0035], “if a human is detected in environment 100 but the SHMS is not able to identify a particular user profile or account associated with the detected human, biometric data or facial recognition techniques may be employed to determine whether the user is frequently located in environment 100 (e.g., lives at a house corresponding to environment 100) or is a guest that perhaps does not frequently visit environment 100 (or frequently visits environment 100)”) utilizing the depth measurements (Doken, [0036], “the SHMS may determine a current location of user 102 within environment 100 based on any suitable technique, e.g., wireless signal characteristics of one or more of the user devices and/or networking equipment 112, sensor data (e.g., captured images by one or more of the user devices or one or more cameras positioned at various locations in environment 100; audio signals captured by a microphone of a user device, IoT device or home assistant and indicating a location of a user; or any other suitable sensor data or any combination thereof), or any other suitable technique or any combination thereof”, [0032], “the coordinates may include a coordinate in the Z-axis, to identify the position of each identified object in 3D space, based on images captured using 3D sensors and any other suitable depth-sensing technology.”);
identifying a region of interest of the subject (Doken, [0057], “As shown in data structure 601, the SHMS may classify individuals and their devices, and may store one or more profiles for users in column 602, e.g., a profile for a male adult, a female adult, a teenager, a toddler, and an elderly male, or any other suitable humans or pets, or any combination thereof, determined to be living in or frequently present at a particular environment”, [0004], “The SHMS may identify a human in proximity to the object, and determine an identity of the human.”) (Doken, [0011], “identifying the human in proximity with the object comprises determining a current location in the environment of the user device associated with a user profile of the human, and determining that the current location in the environment of the user device associated with the user profile is proximate to the object associated with the potentially hazardous condition.”, the SHMS also identifies the zone or area that is in proximity to the human to determine whether there is a potentially hazardous object, [0100], “detects a vulnerable user close to, and a human body part (e.g., hand, arm, leg, etc.) next to, the hazardous object within the field-of-view of the user, the SHMS may immediately jump to a hazard AR warning level of a high urgency”, [0034], “the SHMS may generate a data structure for a current field of view of the user, including object identifiers associated with objects in environment 100, and such data structure may include coordinates representing the position of the field of view and objects in environment 100. A field of view may be understood as a portion of environment 100 that is presented to user 102 at a given time via a display (e.g., an angle in a 360-degree sphere environment) when the user is at a particular location in environment 100 and has oriented a user device in a particular direction in environment 100”);
detecting an object in the field of view (Doken, [0034], “the SHMS may generate a data structure for a current field of view of the user, including object identifiers associated with objects in environment 100, and such data structure may include coordinates representing the position of the field of view and objects in environment 100. A field of view may be understood as a portion of environment 100 that is presented to user 102 at a given time via a display (e.g., an angle in a 360-degree sphere environment) when the user is at a particular location in environment 100 and has oriented a user device in a particular direction in environment 100”) with the non-contact monitoring system (Doken, [0004], “a smart home management system (SHMS) configured to identify, using a sensor, a location of an object in an environment, and determine a classification of the object. The SHMS may identify a human in proximity to the object, and determine an identity of the human. The SHMS may determine that a hazardous condition may occur, based on a combination of the location of the object, the classification of the object, and the identity of the human in proximity to the object.”, [0011], “the SHMS is further configured to generate a plurality of user profiles, wherein at least one of the user profiles is associated with a respective user device, and identifying the human in proximity with the object comprises determining a current location in the environment of the user device associated with a user profile of the human, and determining that the current location in the environment of the user device associated with the user profile is proximate to the object associated with the potentially hazardous condition”) utilizing the depth measurements (Doken, [0032], “the coordinates may include a coordinate in the Z-axis, to identify the position of each identified object in 3D space, based on images captured using 3D sensors and any other suitable depth-sensing technology.”);
tracking movement of the object over time (Doken, [0054], “ the SHMS may, e.g., based on an initial walkthrough the SHMS requests the user to perform, capture images of various objects in a particular environment, and monitor and update the locations and types of objects in the environment based on sensor data over time. For example, data structure 400 may store an indication that an object of electric cord 402 is located in a second floor bedroom 404, along with one or more images 406, 408 of different views or portions or orientations of electric chord 402. In some embodiments, data structure 400 may store one or more augmented reality scenes or notifications associated with potentially hazardous conditions associated with a particular object, or any other suitable information, or any combination thereof.”) utilizing the depth measurements (Doken, [0032], “the coordinates may include a coordinate in the Z-axis, to identify the position of each identified object in 3D space, based on images captured using 3D sensors and any other suitable depth-sensing technology.”);
determining, with the non-contact monitoring system, whether the detected object satisfies one or more criteria designating the detected object as a potential hazard to the subject (Doken, [0004], “The SHMS may determine that a hazardous condition may occur, based on a combination of the location of the object, the classification of the object, and the identity of the human in proximity to the object. In response to determining that the hazardous condition may occur, an augmented reality scene associated with the potentially hazardous condition associated to the object may be generated for presentation at a user device.”, [0062], “to determine common hazards for certain ages or age groups, and tailor warnings provided to such toddler (and/or male adult and female adult indicated in column 604, which may correspond to parents of the toddler) in the particular environment based on the referenced information”, [0041], “The SHMS may pull such data from a cloud/knowledge base system, e.g., server 1004 and/or database 1005 and/or any other suitable data storage or any combination thereof. Such features may enable users living or frequently present in environment 100 to be reminded of a past accident or hazardous condition that should be avoided, as well as to enable guests visiting environment 100 to be precisely warned to historical accidents, since they may not be familiar with the location and/or objects associated with the potentially hazardous location. 
For example, certain potentially hazardous conditions may be unique to a particular layout and/or combination of objects within environment 100, and thus it may be particularly beneficial to warn guests of such potentially hazardous conditions give their likely unfamiliarity with such conditions.”, [0037], “the SHMS detecting a toddler being within the same room as an identified object (e.g., a knife) may trigger a warning to parents of the toddler and/or the toddler, whereas the SHMS detecting an adult, such as user 102, may set a lower proximity threshold (e.g., do not provide a warning until the user is within five feet of the identified object)”, the proximity threshold is an example of a criteria); and
upon determining the detected object satisfies the one or more criteria, initiating an alarm (Doken, [0059], “the SHMS may determine a type of alert, or whether to provide an alert at all, for a potentially hazardous condition based at least in part on the vulnerability indication for a particular user. For example, the user profile associated with the toddler or elderly male may be provided an alert in a situation in which the male adult or female adult may not be provided an alert, and/or may be provided with a more urgent alert than the male adult or female adult may be provided with in the same situation, and/or non-vulnerable users may be provided with an alert concerning the vulnerable user.”, [0066], “just the mere fact that a potentially hazardous object is determined by the SHMS to be close to a user may be enough to generate an alert to other users. For instance, an adult may be issued a warning (e.g., including an augmented reality scene or any other suitable warning) by the SHMS that the toddler is nearby or within the field of view of a bleach bottle (e.g., within one foot, or within any suitable distance), since the toddler may try to open the bottle and waiting until the bottle is open may not provide a sufficient notice period to enable the adult to prevent an accident or correct the situation. As another example, if the SHMS determines an elderly individual is nearby or within the field of view of a knife, an alert (e.g., including an augmented reality scene or any other suitable warning) may be generated and provided to a device of the elderly individual and one or more other users within the household. In some embodiments, if the non-vulnerable user receiving the alert is not sure about the severity of the situation, such user may request the SHMS to send a video feed from the vulnerable user to review the live feed and make an assessment as to what action to take.”).
Doken does not explicitly disclose identifying a region of interest of the subject, comprising a head or a face of the subject.
However, Li teaches identifying a region of interest of the subject, comprising a head or a face of the subject (Li, [0011], “The thermal infrared image recognition processing module detects the position of the nostril or the oral cavity and the position of the head region of the infant, and discriminates the sleeping position of the infant's head according to the detected position of the nostril or the oral cavity relative to the head region”).
Doken and Li are both considered to be analogous to the claimed invention because they are in the same field of surveillance or monitoring. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Doken to incorporate the teaching of Li of identifying a region of interest of the subject comprising a head or a face of the subject. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to provide infant safety monitoring (Li, [0007]).
CLAIM 2
The combination of Doken in view of Li discloses the method of claim 1 (Doken, [0004], “systems and methods are provided herein for a smart home management system (SHMS) configured to identify, using a sensor, a location of an object in an environment, and determine a classification of the object”, [0035], “The SHMS may monitor and store any suitable type of user information associated with user 102, and may reference the particular user profile or account to determine an identity of a human (e.g., user 102) in environment 100”, [0049], “Each of user 102 and 204 may have a user profile with the SHMS specifying respective characteristics of the users“, “user 204 may not be wearing or carrying a particular device, and instead may be monitored via a baby camera or other camera positioned at one or more locations in environment 200”), wherein determining whether the detected object satisfies the one or more criteria designating the detected object as a potential hazard to the subject (Doken, [0004], “The SHMS may determine that a hazardous condition may occur, based on a combination of the location of the object, the classification of the object, and the identity of the human in proximity to the object. In response to determining that the hazardous condition may occur, an augmented reality scene associated with the potentially hazardous condition associated to the object may be generated for presentation at a user device.”, [0006], “determining the identity of the human comprises identifying a characteristic of the identified human, and determining that the hazardous condition may occur, based on the combination, comprises determining that a database stores an indication that the combination of the identified characteristic and the object is indicative that the hazardous condition may occur. 
In some embodiments, the characteristic corresponds to one or more of an age of the human, and a level of distraction of the human.”), the criteria are based on the combination of the characteristic of the human and the object determined using a database, [0041], “The SHMS may pull such data from a cloud/knowledge base system, e.g., server 1004 and/or database 1005 and/or any other suitable data storage or any combination thereof. Such features may enable users living or frequently present in environment 100 to be reminded of a past accident or hazardous condition that should be avoided, as well as to enable guests visiting environment 100 to be precisely warned to historical accidents, since they may not be familiar with the location and/or objects associated with the potentially hazardous location. For example, certain potentially hazardous conditions may be unique to a particular layout and/or combination of objects within environment 100, and thus it may be particularly beneficial to warn guests of such potentially hazardous conditions give their likely unfamiliarity with such conditions.”) comprises: utilizing artificial intelligence (AI) to recognize the detected object (Doken, [0033], “the SHMS may utilize one or more machine learning models to localize and/or classify objects in environment 100.”).
CLAIM 3
The combination of Doken in view of Li discloses the method of claim 1 (Doken, [0004], “systems and methods are provided herein for a smart home management system (SHMS) configured to identify, using a sensor, a location of an object in an environment, and determine a classification of the object”, [0035], “The SHMS may monitor and store any suitable type of user information associated with user 102, and may reference the particular user profile or account to determine an identity of a human (e.g., user 102) in environment 100”, [0049], “Each of user 102 and 204 may have a user profile with the SHMS specifying respective characteristics of the users“, “user 204 may not be wearing or carrying a particular device, and instead may be monitored via a baby camera or other camera positioned at one or more locations in environment 200”), wherein the one or more criteria designating the detected object as a potential hazard to the subject (Doken, [0004], “The SHMS may determine that a hazardous condition may occur, based on a combination of the location of the object, the classification of the object, and the identity of the human in proximity to the object. In response to determining that the hazardous condition may occur, an augmented reality scene associated with the potentially hazardous condition associated to the object may be generated for presentation at a user device.”, [0006], “determining the identity of the human comprises identifying a characteristic of the identified human, and determining that the hazardous condition may occur, based on the combination, comprises determining that a database stores an indication that the combination of the identified characteristic and the object is indicative that the hazardous condition may occur. 
In some embodiments, the characteristic corresponds to one or more of an age of the human, and a level of distraction of the human.”), the criteria are based on the combination of the characteristic of the human and the object determined using a database, [0041], “The SHMS may pull such data from a cloud/knowledge base system, e.g., server 1004 and/or database 1005 and/or any other suitable data storage or any combination thereof. Such features may enable users living or frequently present in environment 100 to be reminded of a past accident or hazardous condition that should be avoided, as well as to enable guests visiting environment 100 to be precisely warned to historical accidents, since they may not be familiar with the location and/or objects associated with the potentially hazardous location. For example, certain potentially hazardous conditions may be unique to a particular layout and/or combination of objects within environment 100, and thus it may be particularly beneficial to warn guests of such potentially hazardous conditions give their likely unfamiliarity with such conditions.”) comprises: a proximity of the detected object to the zone (Doken, [0011], “identifying the human in proximity with the object comprises determining a current location in the environment of the user device associated with a user profile of the human, and determining that the current location in the environment of the user device associated with the user profile is proximate to the object associated with the potentially hazardous condition.”, the SHMS also identifies the zone or area that is in proximity to the human to determine whether there is a potentially hazardous object, [0100], “detects a vulnerable user close to, and a human body part (e.g., hand, arm, leg, etc.) next to, the hazardous object within the field-of-view of the user, the SHMS may immediately jump to a hazard AR warning level of a high urgency”).
CLAIM 5
The combination of Doken in view of Li discloses the method of claim 1 (Doken, [0004], “systems and methods are provided herein for a smart home management system (SHMS) configured to identify, using a sensor, a location of an object in an environment, and determine a classification of the object”, [0035], “The SHMS may monitor and store any suitable type of user information associated with user 102, and may reference the particular user profile or account to determine an identity of a human (e.g., user 102) in environment 100”, [0049], “Each of user 102 and 204 may have a user profile with the SHMS specifying respective characteristics of the users“, “user 204 may not be wearing or carrying a particular device, and instead may be monitored via a baby camera or other camera positioned at one or more locations in environment 200”), further comprising receiving an override from a user designating the detected object as non-hazardous (Doken, [0043], “user 102 may be permitted to remove or override a determination by the SHMS of one or more objects as being associated with a potentially hazardous condition, and/or mark one or more objects as being associated with a potentially hazardous condition”) and wherein determining whether the detected object satisfies one or more criteria (Doken, [0004], “The SHMS may determine that a hazardous condition may occur, based on a combination of the location of the object, the classification of the object, and the identity of the human in proximity to the object. 
In response to determining that the hazardous condition may occur, an augmented reality scene associated with the potentially hazardous condition associated to the object may be generated for presentation at a user device.”, [0006], “determining the identity of the human comprises identifying a characteristic of the identified human, and determining that the hazardous condition may occur, based on the combination, comprises determining that a database stores an indication that the combination of the identified characteristic and the object is indicative that the hazardous condition may occur. In some embodiments, the characteristic corresponds to one or more of an age of the human, and a level of distraction of the human.”), the criteria are based on the combination of the characteristic of the human and the object determined using a database, [0041], “The SHMS may pull such data from a cloud/knowledge base system, e.g., server 1004 and/or database 1005 and/or any other suitable data storage or any combination thereof. Such features may enable users living or frequently present in environment 100 to be reminded of a past accident or hazardous condition that should be avoided, as well as to enable guests visiting environment 100 to be precisely warned to historical accidents, since they may not be familiar with the location and/or objects associated with the potentially hazardous location.
For example, certain potentially hazardous conditions may be unique to a particular layout and/or combination of objects within environment 100, and thus it may be particularly beneficial to warn guests of such potentially hazardous conditions give their likely unfamiliarity with such conditions.”) comprises: applying the override (Doken, [0043], “user 102 may be permitted to remove or override a determination by the SHMS of one or more objects as being associated with a potentially hazardous condition, and/or mark one or more objects as being associated with a potentially hazardous condition. In some embodiments, the SHMS may provide the user with a temporary override option (e.g., to suspend warning for a particular object or class of objects during a particular user session or time period), or a permanent override option, e.g., an option such as “Never warn me again” with respect to a particular object or class of objects.”).
CLAIM 6
The combination of Doken in view of Li discloses the method of claim 5 (Doken, [0004], “systems and methods are provided herein for a smart home management system (SHMS) configured to identify, using a sensor, a location of an object in an environment, and determine a classification of the object”, [0035], “The SHMS may monitor and store any suitable type of user information associated with user 102, and may reference the particular user profile or account to determine an identity of a human (e.g., user 102) in environment 100”, [0049], “Each of user 102 and 204 may have a user profile with the SHMS specifying respective characteristics of the users“, “user 204 may not be wearing or carrying a particular device, and instead may be monitored via a baby camera or other camera positioned at one or more locations in environment 200”), further comprising utilizing AI to identify non-hazardous objects based on the override (Doken, [0072], “machine learning model 800 may be trained using training data set 804 comprising indications of historical hazardous conditions and actions taken with respect to users having certain characteristics. For example, training data set 804 may comprise one or more of the examples as described in connection with FIGS. 1-7, which may be used as a data point for machine learning model 800, or any other suitable examples, or any combination thereof. For example, training data may be labeled as rising to the level of providing a warning or not rising to the level of providing a warning, and/or the training data may indicate a manner of providing the warning and/or to how many devices the warning is provided. 
In some embodiments, the training examples may be labeled based on feedback received from users as to whether a warning provided in a particular scenario was adequate or not.”, [0043], “the SHMS may provide the user with a temporary override option (e.g., to suspend warning for a particular object or class of objects during a particular user session or time period), or a permanent override option, e.g., an option such as “Never warn me again” with respect to a particular object or class of objects.”).
CLAIM 7
The combination of Doken in view of Li discloses the method of claim 1 (Doken, [0004], “systems and methods are provided herein for a smart home management system (SHMS) configured to identify, using a sensor, a location of an object in an environment, and determine a classification of the object”, [0035], “The SHMS may monitor and store any suitable type of user information associated with user 102, and may reference the particular user profile or account to determine an identity of a human (e.g., user 102) in environment 100”, [0049], “Each of user 102 and 204 may have a user profile with the SHMS specifying respective characteristics of the users”, “user 204 may not be wearing or carrying a particular device, and instead may be monitored via a baby camera or other camera positioned at one or more locations in environment 200”), wherein initiating an alarm (Doken, [0059], “the SHMS may determine a type of alert, or whether to provide an alert at all, for a potentially hazardous condition based at least in part on the vulnerability indication for a particular user. For example, the user profile associated with the toddler or elderly male may be provided an alert in a situation in which the male adult or female adult may not be provided an alert, and/or may be provided with a more urgent alert than the male adult or female adult may be provided with in the same situation, and/or non-vulnerable users may be provided with an alert concerning the vulnerable user.”, [0066], “just the mere fact that a potentially hazardous object is determined by the SHMS to be close to a user may be enough to generate an alert to other users. For instance, an adult may be issued a warning (e.g., including an augmented reality scene or any other suitable warning) by the SHMS that the toddler is nearby or within the field of view of a bleach bottle (e.g., within one foot, or within any suitable distance), since the toddler may try to open the bottle and waiting until the bottle is open may not provide a sufficient notice period to enable the adult to prevent an accident or correct the situation. As another example, if the SHMS determines an elderly individual is nearby or within the field of view of a knife, an alert (e.g., including an augmented reality scene or any other suitable warning) may be generated and provided to a device of the elderly individual and one or more other users within the household. In some embodiments, if the non-vulnerable user receiving the alert is not sure about the severity of the situation, such user may request the SHMS to send a video feed from the vulnerable user to review the live feed and make an assessment as to what action to take.”) comprises: providing a visual alarm on a display (Doken, [0040], “the SHMS may provide, at one or more of user devices 104, 105, 106 (or via any other suitable device, or any combination thereof), any suitable indications in any suitable form (e.g., audio, tactile, visual, or any other suitable form, or any combination thereof) associated with the potentially hazardous condition associated with the identified object”, [0049], “upon determining that user 204 is close to or interacting with object 208 (e.g., an oven or stove, or other potentially hazardous device, which may be determined based on a live camera feed proximate to object 208 or using any other suitable technique or any combination thereof) may provide a notification 206 to user 102, e.g., via television 116 or any other suitable user device”).
CLAIM 8
The combination of Doken in view of Li discloses the method of claim 7 (Doken, [0004], “systems and methods are provided herein for a smart home management system (SHMS) configured to identify, using a sensor, a location of an object in an environment, and determine a classification of the object”, [0035], “The SHMS may monitor and store any suitable type of user information associated with user 102, and may reference the particular user profile or account to determine an identity of a human (e.g., user 102) in environment 100”, [0049], “Each of user 102 and 204 may have a user profile with the SHMS specifying respective characteristics of the users”, “user 204 may not be wearing or carrying a particular device, and instead may be monitored via a baby camera or other camera positioned at one or more locations in environment 200”), wherein providing a visual alarm on a display (Doken, [0040], “the SHMS may provide, at one or more of user devices 104, 105, 106 (or via any other suitable device, or any combination thereof), any suitable indications in any suitable form (e.g., audio, tactile, visual, or any other suitable form, or any combination thereof) associated with the potentially hazardous condition associated with the identified object”, [0049], “upon determining that user 204 is close to or interacting with object 208 (e.g., an oven or stove, or other potentially hazardous device, which may be determined based on a live camera feed proximate to object 208 or using any other suitable technique or any combination thereof) may provide a notification 206 to user 102, e.g., via television 116 or any other suitable user device”) comprises: providing the visual alarm on a device remote from the subject (Doken, [0007], “the user device is associated with a user different from the identified human, and/or the user device is associated with the identified human”, [0049], “upon determining that user 204 is close to or interacting with object 208 (e.g., an oven or stove, or other potentially hazardous device, which may be determined based on a live camera feed proximate to object 208 or using any other suitable technique or any combination thereof) may provide a notification 206 to user 102, e.g., via television 116 or any other suitable user device”; user 204 could be the patient or baby, and the notification is received remotely by another user, who could be the doctor or a parent).
CLAIM 21
The combination of Doken in view of Li discloses the method of claim 1 (Doken, [0004], “systems and methods are provided herein for a smart home management system (SHMS) configured to identify, using a sensor, a location of an object in an environment, and determine a classification of the object”, [0035], “The SHMS may monitor and store any suitable type of user information associated with user 102, and may reference the particular user profile or account to determine an identity of a human (e.g., user 102) in environment 100”, [0049], “Each of user 102 and 204 may have a user profile with the SHMS specifying respective characteristics of the users”, “user 204 may not be wearing or carrying a particular device, and instead may be monitored via a baby camera or other camera positioned at one or more locations in environment 200”), wherein utilizing AI to recognize the detected object comprises recognizing the detected object as a non-hazardous type of object (Doken, [0037], “the SHMS detecting a toddler being within the same room as an identified object (e.g., a knife) may trigger a warning to parents of the toddler and/or the toddler, whereas the SHMS detecting an adult, such as user 102, may set a lower proximity threshold (e.g., do not provide a warning until the user is within five feet of the identified object).”; the knife is not considered a hazardous object near an adult as long as the adult is not within five feet of it).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Doken in view of Li, and further in view of Fridental et al. (US 2020/0380842 A1, published 01/29/2020), hereinafter referred to as Fridental.
CLAIM 4
The combination of Doken in view of Li discloses the method of claim 3 (Doken, [0004], “systems and methods are provided herein for a smart home management system (SHMS) configured to identify, using a sensor, a location of an object in an environment, and determine a classification of the object”, [0035], “The SHMS may monitor and store any suitable type of user information associated with user 102, and may reference the particular user profile or account to determine an identity of a human (e.g., user 102) in environment 100”, [0049], “Each of user 102 and 204 may have a user profile with the SHMS specifying respective characteristics of the users”, “user 204 may not be wearing or carrying a particular device, and instead may be monitored via a baby camera or other camera positioned at one or more locations in environment 200”).
The combination of Doken in view of Li does not explicitly disclose utilizing artificial intelligence (AI) to determine whether the detected object is covering some or all of the subject's face or head.
However, Fridental teaches utilizing artificial intelligence (AI) (Fridental, [0027], “decision module 10 includes machine learning and/or artificial intelligence capabilities, such as artificial neural networks”) to determine whether the detected object is covering some or all of the subject's face or head (Fridental, [0037], “As indicated in block 310, decision module 10 may detect, for example based on a received image dataset, that the infant's face is absent in scene 90. For example, decision module 10 may be configured to detect when an infant's face is in the scene. In other embodiments, decision module 10 may be configured to detect a face of a specific infant or specific group of infants. For example, processor 14 may receive an image of a specific infant and train decision module 10 to detect the face of the specific infant. Then, in case the face of the infant is absent from the received image data, decision module may detect that the infant's face is not in the scene. As indicated in block 320, decision module 10 may also detect in the scene a situation causing the absence of the face from the image data, and/or a state of the infant. For example, module 10 may detect if the infant's face is covered by a blanket, the infant's face is directed away from the image sensor, and/or any other suitable situation”; if the baby's face is absent from the scene, then some or all of the baby's face may be covered by the blanket).
Doken, Li, and Fridental are all considered to be analogous to the claimed invention because they are in the same field of monitoring systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Doken and Li to incorporate the teachings of Fridental of utilizing artificial intelligence (AI) to determine whether the detected object is covering some or all of the subject's face or head. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to ensure the baby's safety.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENISE G ALFONSO whose telephone number is (571)272-1360. The examiner can normally be reached Monday - Friday 7:30 - 5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENISE G ALFONSO/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662