DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 5, 2025 has been entered.
Status of Claims
This Office action is in response to Applicant's submission filed on December 3, 2025.
Applicant did not indicate any amendments in the pending claims. However, the Examiner notes that the claims contain changes relative to the Final Office Action issued on May 7, 2025. The claims marked “Previously Presented” correspond to the proposed amendments submitted with the Advisory Action request filed on October 28, 2025. Those amendments were not entered by the Examiner at that time and therefore should have been indicated as new amendments in the current claim listing rather than as “Previously Presented” claims. Nevertheless, to advance prosecution, the amended claims are hereby entered.
Claim 13 was cancelled, while claim 21 was added.
Claims 1-12 and 14-21 are pending and have been examined.
This action is made NON-FINAL.
The Examiner would like to note that this application is now being handled by examiner Ivonnemary Rivera González.
Response to Arguments
Applicant's arguments filed December 3, 2025 have been fully considered but they are not persuasive.
Regarding Applicant's arguments concerning the double patenting rejection on pages 14-15: Applicant took no action and did not file an electronic terminal disclaimer (eTerminal Disclaimer) to obviate the obviousness-type double patenting (ODP) rejection. Applicant's stated decision to refrain from responding to this rejection has been considered but is not persuasive to overcome the ODP rejection. Accordingly, the outstanding ODP rejections have been updated in view of Applicant's amendments and are maintained.
Regarding Applicant's arguments against the § 101 rejection of the pending claims on pages 8-12: Applicant's arguments directed to Step 2A Prong 1 and Step 2A Prong 2 have been considered. However, these arguments are not persuasive, and the Examiner respectfully disagrees for the following reasons:
For Step 2A Prong 1 (Remarks, starting at p. 10): Applicant argues that the pending claims are not directed to any of the identified abstract ideas, for various reasons that do not comport with the analysis set forth in MPEP 2106. The Examiner finds these arguments unpersuasive and respectfully disagrees. A claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim (see MPEP 2106.04, subsection II). The Examiner closely examined all claim limitations, individually and as a whole, and found that the steps fall under certain methods of organizing human activity as well as mental and mathematical processes. First, the claims as a whole recite a certain method of organizing human activity: identifying, analyzing and comparing objects of interest (including tenants or guests) and their behavioral characteristics (e.g., excessive noise, bringing prohibited items such as pets, and/or malicious behavior) from acquired video data at least encompasses commercial interactions related to business relations, in order to manage and protect a rental property from guests. Similarly, these steps also fall under “managing personal behavior or relationships or interactions between people,” since such behavioral analysis encompasses the management of interactions between people by analyzing and comparing their social activities in order to “send[] a prohibited object message…” and an “occupancy message” to the user (i.e., the owner and/or tenant) to warn them of, or recommend actions concerning, property risks or damage based on rule violations, which is directed to following rules or instructions.
As for Applicant's arguments (Remarks, p. 10) regarding the steps of “compositing a bounding box around the object of interest…” and “computing a confidence level for the object of interest…”: these steps are broadly recited and can still reasonably be interpreted as capable of being performed by a human being, with a physical aid such as pen and paper, by “compositing” a box (i.e., annotating or drawing shapes on video frame images) and “computing” confidence levels for objects of interest. Moreover, the claim language lacks details as to how the “bounding box” is specifically “composited” and how the confidence levels are “computed” in a manner distinct from what a human could do (even with the aid of a generic computer), and it fails to identify which particular technological components are used to perform these functions. In other words, these functional steps are broad and recited at a high level of generality. Such features can still fall under the abstract ideas of mental and mathematical processes while using the computer as a tool (e.g., invoking “apply it”): “compositing” bounding boxes on images can be done with a physical aid such as pen and paper, and the “computing” step can be performed with the help of a computer to obtain confidence-level results for analyzing object behaviors. The use of a physical aid does not negate the mental nature of the limitation(s), even when generic computer components are used to “composite” such boxes, and the claims do not further specify how the “compositing” step is performed for video data analysis (see MPEP 2106.04(a)(2)(III)(B) and (C)).
For Step 2A Prong 2 (Remarks, starting at p. 9): Applicant alleges that the claims integrate the identified judicial exception into a practical application, and further alleges that the newly added limitation “analyzing a compressed object track of the object of interest to infer one or more behavioral characteristics of the object of interest…” recites a “concrete, computer-implemented technique that improves the functioning of a remote monitoring system.” The Examiner disagrees. The identified limitations do not integrate the judicial exception into a practical application because the steps merely recite the words “apply it” (or an equivalent) together with the judicial exception, merely include instructions to implement the abstract idea on a computer that uses a machine learning (ML) system, or merely use a computer as a tool to perform the abstract idea (see MPEP 2106.05(f) and 2106.04(d)(I)). Specifically, the claim limitations recite the use of a generic computer with a generally and broadly recited ML system that “establish[es]” field-of-view boundaries and “identif[ies]” objects of interest in order to analyze the objects' “compressed object track” for behavioral inferences and to compare object information against a list of prohibited objects, achieving the intended result of protecting a rental property by sending messages (i.e., alerts) about occupancy and prohibited objects on the property. The claimed features thus do not reflect an improvement to the “functioning of a remote monitoring system,” as alleged. Rather, as Applicant asserts, a “machine learning system” is being used (i.e., applied) by the computer to perform the “technical process” (i.e., instructions), and the claim limitations are recited at a high level of generality as to the particulars of the process functions and the specific technological components used.
Moreover, “to show that the involvement of a computer assists in improving the technology, the claims must recite the details regarding how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology,” which in this case is systems for “video-based surveillance” (see MPEP 2106.05(a)(II)). Thus, these limitations and their additional elements, individually and in combination, are not “significantly more,” as they are recited at a high level of generality that cannot provide an inventive concept at Step 2B, and they do not integrate the abstract idea into a practical application (see MPEP 2106.05).
Thus, for all the reasons stated above, the Examiner respectfully disagrees and maintains the 35 U.S.C. § 101 rejection of the pending claims.
Regarding Applicant's arguments against the rejection of the pending claims under 35 U.S.C. § 103 on pages 12-14: Applicant's arguments regarding the amended limitations are not persuasive, and the Examiner respectfully disagrees. Applicant focuses on each prior art teaching individually, rather than on the actual language of each claim limitation and on how the corresponding limitations differ from the prior art teachings under the broadest reasonable interpretation (BRI) of each claim. Under the BRI, the claim limitations pointed out by Applicant are still reasonably taught by at least the combination of Day, Cinnamon and Lee. Further, Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to general allegations that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the combination of these references. As for the argument that the prior art references do not teach the “concept of aggression inferred from movement patterns” (see p. 14 of the Remarks), this assertion is unpersuasive because that concept is not explicitly claimed. Therefore, the Examiner respectfully disagrees and maintains the 35 U.S.C. § 103 rejection of the pending claims.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
At least claims 1, 16 and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over at least claims 1, 13, 15 and 20 of U.S. Patent No. 11935114 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are anticipated by the reference claims, as set forth below:
Instant claims:

Claims 1, 16 and 20: A computer-implemented method for monitoring and protecting a rental property, comprising: (claim 1)
acquiring video data of an entrance for the rental property from a digital camera, wherein the video data comprises video frame images;
establishing a boundary within a field of view of the digital camera, wherein the boundary is associated with the entrance;
using a machine learning system to identify an object of interest and an object track of the object of interest, wherein identifying the object of interest comprises:
compositing a bounding box around the object of interest on one or more video frame images, based on motion analysis; and
computing a confidence level for the object of interest based on image data within the bounding box;
analyzing a compressed object track of the object of interest to infer one or more behavioral characteristics of the object of interest, wherein the behavioral characteristics include aggression or calmness based on movement patterns;
in response to the confidence level exceeding a predetermined threshold, obtaining object information for the object, wherein the object information includes an object type and a category for the object of interest;
comparing the object information for the identified object to a list of prohibited objects for the rental property; and
sending a prohibited object message to a remote computing device in response to the object information being in the list of prohibited objects;
in response to identifying the object of interest traversing the boundary, updating an occupancy count for the rental property; and
sending an occupancy message to the remote computing device, wherein the occupancy message includes the updated occupancy count.

Co-pending or reference claims (U.S. Patent No. 11935114 B2):

Claims 1, 15 and 20: A computer-implemented method for monitoring and protecting a rental property, comprising: (claim 1)
acquiring video data of an entrance for the rental property from a digital camera;
establishing a boundary within a field of view of the digital camera, wherein the boundary is associated with the entrance;
using a machine learning system to identify an object of interest that is being brought by a person traversing the boundary, wherein identifying the object of interest comprises:
establishing a bounding box around the object of interest based on motion analysis; and
computing a confidence level for the object of interest based on image data within the bounding box;
in response to the confidence level exceeding a predetermined threshold, obtaining object information for the object, wherein the object information includes an object type and a category for the object of interest;
comparing the object information for the identified object to a list of prohibited objects for the rental property;
sending a prohibited object message to a remote computing device in response to the object information being in the list of prohibited objects;
in response to identifying the object of interest traversing the boundary, updating an occupancy count for the rental property; and
sending an occupancy message to the remote computing device, wherein the occupancy message includes the updated occupancy count.
Claim 13: wherein the machine learning system is configured and disposed to identify behavior, aggression and characteristics of one or more people in the field of view of the digital camera by analyzing the compressed object tracks of the people.
Consequently, at least instant claim 1 is covered by reference claims 1 and 13 of U.S. Patent No. 11935114 B2. These instant claims are anticipated by the reference patent claims because the reference claims cover every feature of the instant claims, which are broadly recited and encompass the same disclosed technology. Moreover, the instant claims and the reference patent claims share the same invention title and are directed to a method and system for monitoring and protecting a rental property via image/video data analysis and identification of objects of interest and their behaviors and movement patterns, to provide tenant notices of impermissible activities based on property-rule violations and to update an occupancy count (see MPEP 804(II)(B)(2) for more details).
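For illustration only, the decision logic recited in both claim sets charted above (confidence threshold check, prohibited-object comparison, and occupancy update) can be sketched in Python. This is an editor's sketch, not part of the claims or the record; every name, value and data structure below is hypothetical and does not appear in either claim set:

```python
PROHIBITED = {"pet", "speaker"}   # hypothetical "list of prohibited objects"
CONFIDENCE_THRESHOLD = 0.8        # hypothetical "predetermined threshold"

def process_detection(detection, occupancy_count):
    """Sketch of the charted steps: threshold check, comparison to the
    prohibited-object list, and boundary-traversal occupancy update."""
    messages = []
    # "in response to the confidence level exceeding a predetermined threshold,
    #  obtaining object information for the object"
    if detection["confidence"] > CONFIDENCE_THRESHOLD:
        object_type = detection["type"]
        # "comparing the object information ... to a list of prohibited objects"
        if object_type in PROHIBITED:
            # "sending a prohibited object message to a remote computing device"
            messages.append(f"prohibited object detected: {object_type}")
    # "in response to identifying the object of interest traversing the boundary,
    #  updating an occupancy count for the rental property"
    if detection.get("traversed_boundary"):
        occupancy_count += 1
        # "sending an occupancy message ... includes the updated occupancy count"
        messages.append(f"occupancy update: {occupancy_count}")
    return occupancy_count, messages
```

Under these hypothetical values, a detection of a prohibited “pet” crossing the boundary with confidence 0.9 would yield both a prohibited-object message and an occupancy update.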
Claim Objections
Claims are objected to because of the following informalities:
Claims 1, 16 and 20 contain the identifier “(Previously Presented)” when in fact they should have been marked “(Currently Amended)” because they contain new limitations relative to the Office action (OA) issued on May 7, 2025. The Examiner notes that the new limitations came from the Advisory Action request filed on October 28, 2025, which was not entered. The status of every claim in the listing must be indicated after its claim number in order to avoid delays in prosecution; see MPEP 714 and 37 CFR 1.121(c). However, for purposes of advancing compact prosecution, these claims were examined and treated as if marked “(Currently Amended)”.
Claim 21 is objected to because it contains the identifier “(Previously Presented)” when in fact it should have been marked “(New)” based on the last OA issued on May 7, 2025. The status of every claim in the listing must be indicated after its claim number in order to avoid delays in prosecution; see MPEP 714 and 37 CFR 1.121(c). However, for purposes of advancing compact prosecution, this claim was examined and treated as if marked “(New)”.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 and 14-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis begins with independent claims 1 and 16, the most representative of independent claims 1, 16 and 20, as follows:
At Step 1: Claims 1-12, 14-15 and 21 fall under the statutory category of a process, while claims 16-20 are directed to a machine.
At Step 2A Prong 1: Claim 1 (representative of claims 16 and 20) recites an abstract idea in the following limitations:
acquiring video data of an entrance for the rental property…, wherein the video data comprises video frame images;
establishing a boundary within a field of view of the digital camera, wherein the boundary is associated with the entrance;
…identify an object of interest and an object track of the object of interest, wherein identifying the object of interest comprises:
compositing a bounding box around the object of interest on one or more video frame images, based on motion analysis; and
computing a confidence level for the object of interest based on image data within the bounding box;
analyzing a compressed object track of the object of interest to infer one or more behavioral characteristics of the object of interest, wherein the behavioral characteristics include aggression or calmness based on movement patterns;
in response to the confidence level exceeding a predetermined threshold, obtaining object information for the object, wherein the object information includes an object type and a category for the object of interest;
comparing the object information for the identified object to a list of prohibited objects for the rental property; and
sending a prohibited object message…in response to the object information being in the list of prohibited objects;
in response to identifying the object of interest traversing the boundary, updating an occupancy count for the rental property; and
sending an occupancy message…wherein the occupancy message includes the updated occupancy count.
Generally, and as disclosed in the specification at ¶0023, the claimed invention provides “systems and methods for hotels and rental property monitoring and protection.” However, claim 1 recites the abstract idea of a certain method of organizing human activity (see MPEP 2106.04(a)(2), subsection II) in the form of “commercial or legal interactions.” Specifically, the abstract idea is recited in at least the steps of “acquiring video data of an entrance for the rental property…”, “…identify an object of interest and an object track of the object of interest…”, “analyzing a compressed object track of the object of interest to infer one or more behavioral characteristics of the object of interest…” and “comparing the object information for the identified object to a list of prohibited objects for the rental property…”. Identifying, analyzing and comparing objects of interest (including humans such as tenants or guests) and their behavioral characteristics (e.g., excessive noise, bringing prohibited items such as pets, and/or malicious behavior; see ¶0021 of Applicant's disclosure) from acquired video data at least encompasses commercial interactions related to business relations, in order to manage and protect a rental property from guests. Similarly, these steps also fall under the abstract idea sub-grouping of “managing personal behavior or relationships or interactions between people,” since such behavioral analysis of the objects (i.e., tenants/guests; see ¶0023 of Applicant's disclosure) encompasses the management of interactions between people by analyzing and comparing their social activities in order to “send[] a prohibited object message…” and an “occupancy message” to the user (i.e., the owner and/or tenant) to warn them of, or recommend actions concerning, property risks or damage based on rule violations, which is directed to following rules or instructions (see ¶0064, ¶0070 and Figs. 10-12 of Applicant's disclosure).
The steps of “…identify an object of interest and an object track of the object of interest…”, “compositing a bounding box around the object of interest on one or more video frame images…”, “analyzing a compressed object track of the object of interest to infer one or more behavioral characteristics of the object of interest…” and “comparing the object information for the identified object to a list of prohibited objects for the rental property…” fall under the abstract idea of mental processes that can practically be performed in the human mind or with pen and paper (see MPEP 2106.04(a)(2), subsection III). Identifying, analyzing and comparing objects of interest and their behavioral characteristics, and drawing a bounding box around a suspected object of interest, require observation, evaluation, judgment and opinion. These steps can be done with a physical aid such as pen and paper, and can be performed by humans with or without the assistance of a computer (i.e., as a tool). The use of such aids does not negate the mental nature of the limitation(s); the concept is merely claimed to be performed on a generic computer, using the computer as a tool to perform object-information analysis (see MPEP 2106.04(a)(2)(III)(B) and (C)).
As for the step of “compute a confidence level for the object of interest based on image data,” it encompasses mathematical concepts (i.e., determining probability values; see ¶0061 of Applicant's disclosure), as it requires specific mathematical calculations that can be performed mentally or with pen and paper.
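As one hypothetical illustration of such a probability-value computation (an editor's sketch, not drawn from Applicant's disclosure), a detection confidence might be derived by normalizing raw classifier scores into probabilities and taking the maximum:

```python
import math

def softmax_confidence(scores):
    """Hypothetical confidence computation: normalize raw class scores
    into probabilities (softmax) and report the maximum as the
    detection confidence for the object of interest."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return max(probs)
```

For example, raw scores of [2.0, 1.0, 0.1] yield a confidence of roughly 0.66, which could then be compared against a predetermined threshold.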
At Step 2A Prong 2: For independent claims 1, 16 and 20, the judicial exception(s) or abstract idea(s) previously identified are not integrated into a practical application (see MPEP 2106.04(d)). The claims recite the additional element(s) of a digital camera, a machine learning system and a remote computing device (claims 1, 16 and 20); an electronic computation device, a processor and a memory (claim 16); and a computer program product (claim 20). These additional elements, individually and in combination, and considering the claims as a whole, are merely used as tools to perform the abstract idea (see MPEP 2106.05(f)). Specifically, the claim limitations are recited as being performed by the computer while using a “machine learning system” to establish bounding boxes and perform the object-information analysis (i.e., comparison and identification against the “list of prohibited objects”). The computer that “uses” the machine learning system, as claimed, is recited at a high level of generality and is used as a tool to perform the generic computer functions of identifying, analyzing and comparing objects of interest and their behavioral characteristics and creating bounding boxes around suspected objects. Thus, the steps mentioned above merely describe and apply the abstract idea without placing any limits on how the technological components are improved, and without distinguishing, in the claim language, the performed limitations from functions that generic computer components can perform.
At Step 2B: Independent claims 1, 16 and 20 do not provide an inventive concept. The recited additional elements of the claim(s) are: a digital camera, a machine learning system and a remote computing device (claims 1, 16 and 20); an electronic computation device, a processor and a memory (claim 16); and a computer program product (claim 20). These additional elements are not sufficient to amount to significantly more than the judicial exception or abstract idea (see MPEP 2106.05) because, as indicated at Step 2A Prong 2, the additional elements claimed are merely instructions to “apply” the abstract idea, which cannot provide an inventive concept. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer, and do not provide an inventive concept at Step 2B.
For dependent claims 2-12, 14, 15, 17-19 and 21, the same analysis applies. By virtue of their dependency on the independent claims analyzed above, these claims fall under the same abstract idea(s) of a certain method of organizing human activity and mental and mathematical processes. They describe the following additional limitations:
Claims 2-12, 14, 15, 17-19 and 21: further describe the abstract idea of the computer-implemented method as embellishments directed to aspects of the information monitored within the property management system, which is the central theme of the abstract idea identified above. These claims are thus directed to the abstract idea groupings of “commercial or legal interactions” (e.g., interactions for business relations) and “managing personal behavior or relationships or interactions between people” (e.g., monitoring social activities and following rules or instructions), as well as mental processes involving evaluation and judgment and mathematical processes involving mathematical calculations.
At Step 2A Prong 2 and Step 2B: Dependent claims 2-12, 14, 15, 17-19 and 21 do not include additional elements. Rather, what is claimed simply further defines the same abstract idea set forth in independent claims 1, 16 and 20. Nothing additional is claimed that is not part of the abstract idea.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-12 and 14-21 are rejected under 35 U.S.C. 103 as being unpatentable over Day (U.S. Patent No. 11645706 B1) in view of Cinnamon (U.S. Patent No. 9996890 B1), and further in view of Lee (U.S. Pub. No. 20210158048 A1).
Regarding claim 1:
Day teaches:
acquiring video data of an entrance for the rental property from a digital camera, wherein the video data comprises video frame images; (In C8; L2 – 7; Figs. 2 – 3; Fig. 5 (202a – 202n and 104): teaches that “camera systems 104 a-104 n may perform the computer vision operations to extract data about the video frames (e.g., how many people are detected in a video frame, the type of pet detected, a current audio level, etc.)”. Refer to C3; L30 – 37 for an example wherein “a party is detected may be determined based on using computer vision to detect people and counting the number of people present at the location” and to C4; L1 – 3 for the system “sensing camera” that “may check for the number of people, pets, music etc. as defined by the on-line rental application contract completed by the renter and the property owner.”)
establishing a boundary within a field of view of the digital camera, wherein the boundary is associated with the entrance; (In C14; L6 – 9; Fig. 2 (152a – 152b and 50a – 50n): teaches that “Each of the camera systems 104 a-104 n are shown having the field of view 152 a-152 b. In the example shown, the locations 50 a-50 n may be the subject of the monitoring”, as shown in Fig. 2. Further, the “rental property owner may provide the people 70 a-70 n with the rental agreement 122”, which comprises a “list of restrictions” with “various entries that may comprise a number of people, disallowed animals, noise levels and/or behaviors”, and “the list of restrictions may be converted to parameters that may be used by the computer vision operations and/or the audio analytics to perform the detection” (see C14; L10 – 15).)
using a machine learning system to identify an object of interest and an object track of the object of interest, wherein identifying the object of interest comprises: (In C32; L3 – 10; Fig. 4 (240a – 240n and 226): teaches that the system “CNN module 240 b may conduct inferences against the machine learning model (e.g., to perform object detection)”.)
compositing a bounding box around the object of interest on one or more video frame images, based on motion analysis; and (In C32; L56 – 61; Fig. 2 (162): teaches that the “CNN module 240 b” may execute a data flow directed to feature extraction and matching, including “component operators that manipulate lists of components (e.g., components may be regions of a vector that share a common attribute and may be grouped together with a bounding box)”. Refer to C10; L25 – 27 for an example wherein a “dotted box 162 is shown around the head of the person 70 c” and it “may represent the camera system 104 detecting characteristics of the object 16” as further shown in Fig. 2, element 162. As for the motion analysis basis, the “CNN module 240 b” may be “configured to perform feature extraction and/or matching solely in hardware” and, by tracking “the feature points temporally, an estimate of ego-motion of the capturing platform or a motion model of observed objects in the scene may be generated” (see C31; L7 – 13).)
comparing the object information for the identified object to a list of prohibited objects for the rental property; and (In C17; L21 – 28; Fig. 5 (170); Fig. 6: teaches “the camera systems 104 a-104 n may be configured to detect when each of the people 70 a-70 n first arrive and then compare the people count 170 a-170 e to a threshold (e.g., based on the entry in the list of restrictions). For example, the camera systems 104 a-104 n may determine whether a party is being held at the rental property based on various parameters (e.g., people count, loud noises, music, etc.)”. Another example of such comparison claimed is “if the landlord does not list pets as an entry on the list of restrictions, the computer vision operations may not search for pets.” (see C17; L43 – 45).)
sending a prohibited object message to a remote computing device in response to the object information being in the list of prohibited objects; (In C14; L22 – 27; Figs. 6 – 7: teaches “If the data detected by the camera systems 104 a-104 n matches any of the entries on the list of restrictions, the camera system 104 may generate a notification. The notification may be a warning to the people 70 a-70 n to cure the cause of the warning. The notification may be provided to the rental property owner.” Further, the notification may “provide the list of restrictions and indicate that a violation has been detected. In some embodiments, the renter may be able to respond to the notification. The response to the notification may be sent to the landlord (e.g., to acknowledge the notification and confirm they have taken action to correct the violation)” (see C17; L35 – 41).)
in response to identifying the object of interest traversing the boundary, updating an occupancy count for the rental property; and sending an occupancy message to the remote computing device, wherein the occupancy message includes the updated occupancy count. (In C12; L32 – 44; Figs. 2 and 6 – 7: teaches an example that “If six guests is one of the detection parameters, then the camera system 104 may analyze video frames generated to count the number of the guests 70 a-70 n to determine if the number of guests is less than, equal to or greater than six guests. In the example shown, the extracted data 170 a-170 e about the number of the guests 70 a-70 e may indicate five guests are at the rental property 50. Since the number of guests is less than the amount in the detection parameters, then the camera system 104 may not indicate a breach has been detected”. However, “if more than six guests were detected, then the camera system 104 may generate the notification signal NTF to indicate that a breach of the rental agreement 122 has been detected” which is directed to at least updating occupancy count. Generally, the system does count the number of people at the rental property and share an occupancy message, for example, “each camera system 104 a-104 n may perform the computer vision operations to determine the number count 170 a-170 e of people and share the number count 170 a-170 e to determine a total number of occupants at the rental property” as further shown in Fig. 6 (see C16; L26 – 30).)
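Purely for illustration of the boundary-crossing occupancy logic described in the cited passages of Day, the following sketch shows how a count could be incremented or decremented as a tracked object traverses a virtual boundary and compared against a detection parameter (e.g., six guests). All class, method, and parameter names here are hypothetical and do not appear in any cited reference:

```python
# Illustrative sketch (hypothetical names): update an occupancy count
# when a tracked object's position crosses a boundary coordinate, and
# flag a breach when the count exceeds the maximum in the detection
# parameters (e.g., six guests, per Day's example).

class OccupancyCounter:
    def __init__(self, boundary_x, max_occupancy):
        self.boundary_x = boundary_x        # x-coordinate of the virtual boundary
        self.max_occupancy = max_occupancy  # detection parameter from the rental agreement
        self.count = 0

    def update(self, prev_x, curr_x):
        """Update the count for one track step; return True if the
        updated count breaches the maximum occupancy."""
        if prev_x < self.boundary_x <= curr_x:
            self.count += 1                       # crossed toward the entrance
        elif curr_x < self.boundary_x <= prev_x:
            self.count = max(0, self.count - 1)   # crossed away from the entrance
        return self.count > self.max_occupancy

counter = OccupancyCounter(boundary_x=100.0, max_occupancy=6)
breach = False
for prev_x, curr_x in [(90.0, 110.0)] * 7:  # seven inbound crossings, no exits
    breach = counter.update(prev_x, curr_x) or breach
print(counter.count, breach)  # 7 True
```

With five inbound crossings, no breach would be indicated, matching Day's example of five guests against a six-guest parameter.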
Day does not explicitly teach the abilities of computing a confidence level for the object of interest and, in response to the confidence level exceeding a threshold, obtaining object information with an object type and a category. However, Cinnamon teaches:
computing a confidence level for the object of interest based on image data within the bounding box; (In C8; L13 – 25; Fig. 4 (408): teaches that “the classifier may apply a function to the n-dimensional vector resulting from the application of the weight matrix to generate a probability distribution that indicates between the pixels within a given candidate bounding box and the classes of items defined by the training data during the training phase”, wherein such function applied to “the n-dimensional vector may be a softmax function, which generates a probability distribution comprising a set of probability values” (i.e. “confidence values”). Refer to C5; L1 – 19 and C6; L29 – 50 for more details of the “classification engine” normalizing images, using a “segmenter” to identify “candidate bounding boxes” within a “candidate image”, and categorizing “the contents of the given candidate bounding box” by expressing the similarity “between the contents of a given candidate bounding box and a given class as a respective probability value” (i.e. also represented as “confidence values”) that, combined for each class, form “a probability distribution” (see C8; L21 – 50).)
in response to the confidence level exceeding a predetermined threshold, obtaining object information for the object, wherein the object information includes an object type and a category for the object of interest; (In C8; L32 – 40; Fig. 4 (408): teaches “Once the classifier has determined a probability distribution for a given candidate bounding box, the classifier determines whether any of the confidence values in the distribution exceed a given threshold value, e.g. 0.95. If a probability for a given candidate bounding box exceeds the threshold value, the classification engine may classify the candidate bounding box as the class of item that meets the threshold probability, thereby identifying the candidate bounding box as the class of item”. Refer to C20 – 21; L65 – 67 and L1 – 11 for an example wherein “If the probability for a given candidate bounding box exceeds the threshold confidence value, classification engine 104 may classify the given bounding box as containing the item that meets or exceeds the threshold confidence value, thereby identifying an item in the given candidate bounding box. In some embodiments, classification engine 104 may determine that individual pixels or regions of the data contain insufficient information to make classification determinations with a sufficiently high confidence level.”)
It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Day to provide the abilities of computing a confidence level for the object of interest and, in response to the confidence level exceeding a threshold, obtaining object information with an object type and a category, as taught by Cinnamon, in order to improve “the identification of items within an object, e.g. items in objects such as pieces of baggage or a scene” (C4; L36 – 39; Cinnamon), see also MPEP 2143.I.G. Further, such abilities provided by Cinnamon into the Day system would have been obvious because the claimed invention is merely a simple arrangement of old elements, with each performing the same function it had been known to perform, yielding no more than one would expect from such an arrangement. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). In other words, all of the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by adding the well-known features of establishing a bounding box around the object of interest based on motion analysis and computing a confidence level for the object of interest based on image data within the bounding box into a computer-implemented method for monitoring and protecting a rental property incorporating machine learning and bounding box techniques). See also MPEP § 2143(I)(A).
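The confidence-thresholding step Cinnamon describes (a softmax over class scores producing a probability distribution, with classification only when the top probability exceeds a threshold such as 0.95) can be sketched as follows. The function names, class labels, and score values are hypothetical illustrations, not taken from the reference:

```python
# Illustrative sketch of softmax confidence thresholding: a bounding
# box is assigned a class only if its highest probability exceeds the
# threshold; otherwise the information is deemed insufficient (None).
import math

def softmax(scores):
    """Convert raw class scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores, class_names, threshold=0.95):
    """Return the class label if the top confidence value exceeds the
    threshold, otherwise None (insufficient confidence to classify)."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return class_names[best] if probs[best] > threshold else None

labels = ["person", "pet", "television"]
print(classify([9.0, 1.0, 0.5], labels))   # high margin -> "person"
print(classify([1.2, 1.0, 0.9], labels))   # low margin  -> None
```

The 0.95 threshold mirrors the example value quoted from Cinnamon; in practice the threshold would be a tunable parameter.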
Day teaches “behavioral analysis” by determining what the objects are doing (see C11; L47 – 67; Day) and inferences of object behavior based on “regular or rhythmic body movement” (see C11; L47 – 67; Day). Also, Day teaches that the system’s “CNN module 240b may be configured to perform feature extraction” with feature points that represent “interesting areas in the video frames (e.g., corners, edges, etc.)”, which can be tracked to generate “a motion model of observed objects in the scene”, and, with the help of a “matching algorithm” incorporated in the “CNN module 240b”, the “most probable correspondences between feature points in a reference video frame and a target video frame” can be found. Thus, during the process of matching “pairs of reference and target feature points, each feature point may be represented by a descriptor (e.g., image patch, SIFT, BRIEF, ORB, FREAK, etc.)” (see C31; L7 – 20; Day), which can be an implicit example of compressed object track analysis. However, neither Day nor Cinnamon explicitly teaches the ability of analyzing a specific compressed object track of the object of interest to infer its behavioral characteristics. Lee, however, teaches (i.e. relying on the provisional date of 11/26/2019 for Provisional Application 62/940,431):
analyzing a compressed object track of the object of interest to infer one or more behavioral characteristics of the object of interest, wherein the behavioral characteristics include aggression or calmness based on movement patterns; (In ¶0063 – 64; Fig. 3 (305, 306 and 308): teaches a “camera 302” that captures a “video 305” that “also includes image frames of the person 306 deviating from the path 308, and moving towards a side of the property 304”, in accordance with ¶0063 of Applicant’s disclosure. Further, in “FIG. 3, the monitoring server 330 receives image data 312” that includes “images of the person 306 approaching the front door 342 on the path 308, the person 306 deviating from the path 308, and the person approaching the side of the property 304.” As for inferring behavioral characteristics after analyzing an object’s compressed object track, this prior art teaches an example wherein “the monitoring server 330 assigns values less than 1.0 for the monitoring system data 314 indicating the system status of “armed,” the time 2:05 am, and the front door 342 locked. The system status, time of day, and front door status indicate that it is late at night, and the resident 334 is not likely expecting visitors. Therefore, the monitoring server 330 assigns a value of 0.90 to the system status of “armed,” 0.95 to the time of 2:05 am, and 0.95 to the front door 342 locked. These three monitoring system data points have the effect of lowering the effective threshold for detecting abnormal events. Thus, the monitoring server 330 is more likely to determine that the person 306 approaching the side of the property 304 is an abnormal event, based on these factors” and as shown in Fig. 3 along with their “path 308” (see ¶0068 – 70).)
It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Day and Cinnamon to provide the ability of analyzing a specific compressed object track of the object of interest to infer its behavioral characteristics, as taught by Lee, in order to “accurately detect and classify events in order to send valid notifications to the residents” as well as “differentiate between normal and abnormal events” (¶0006; Lee), see also MPEP 2143.I.G. Further, such ability provided by Lee into the Day and Cinnamon systems would have been obvious because the claimed invention is merely applying a known technique to a known method ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). In other words, all of the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by applying the known technique of monitoring property using machine learning, wherein the path of each person is monitored and stored, to the known method and system for tracking objects such as humans on rental properties using image processing, monitoring the movements and activities of objects to differentiate between normal and abnormal events). See also MPEP § 2143(I)(D).
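Purely as an illustration of analyzing a compressed object track to infer a coarse behavioral label, the following sketch downsamples a track and labels erratic, high-variance motion differently from smooth motion. The names, the downsampling scheme, and the variance-based labeling are assumptions for illustration only; they are not Lee's disclosed method:

```python
# Illustrative sketch (hypothetical names): infer a coarse behavioral
# label from a compressed (downsampled) object track. High variance in
# step-to-step speed is labeled "agitated"; smooth motion is "calm".
import math

def compress_track(points, keep_every=3):
    """Downsample a track to every Nth point (a simple 'compressed' form)."""
    return points[::keep_every]

def step_speeds(track):
    """Distance covered between consecutive retained track points."""
    return [math.dist(a, b) for a, b in zip(track, track[1:])]

def infer_behavior(track, variance_threshold=4.0):
    """Label the track based on the variance of its step speeds."""
    speeds = step_speeds(track)
    mean = sum(speeds) / len(speeds)
    variance = sum((s - mean) ** 2 for s in speeds) / len(speeds)
    return "agitated" if variance > variance_threshold else "calm"

steady = [(float(i), 0.0) for i in range(12)]  # uniform, smooth steps
erratic = [(0, 0), (0, 1), (0, 2), (1, 0), (5, 0), (10, 0),
           (20, 0), (20, 1), (20, 2), (21, 0), (22, 0), (23, 0)]
print(infer_behavior(compress_track(steady)))   # "calm"
print(infer_behavior(compress_track(erratic)))  # "agitated"
```

A deployed system in the spirit of Lee would weigh such a label against contextual data (arming status, time of day, door state) rather than use it in isolation.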
Regarding claims 16 and 20:
Day further teaches:
a processor; a memory coupled to the processor, the memory containing instructions, that when executed by the processor, cause the electronic computation device to: (See Fig. 1 (102a - n,110a and 112a): Refer to C4; L7 – 26 for more system details.)
acquire video data of an entrance for a rental property from a digital camera, wherein the video data comprises video frame images; (In C8; L2 – 7; Figs. 2 – 3; Fig. 5 (202a – 202n and 104): teaches that “camera systems 104 a-104 n may perform the computer vision operations to extract data about the video frames (e.g., how many people are detected in a video frame, the type of pet detected, a current audio level, etc.)”. Refer to C3; L30 – 37 for an example wherein “a party is detected may be determined based on using computer vision to detect people and counting the number of people present at the location” and to C4; L1 – 3 for the system “sensing camera” that “may check for the number of people, pets, music etc. as defined by the on-line rental application contract completed by the renter and the property owner.”)
use a machine learning system to establish a boundary within a field of view of the digital camera, wherein the boundary is associated with the entrance; (In C14; L6 – 9; Fig. 2 (152a – 152b and 50a – 50n): teaches that “Each of the camera systems 104 a-104 n are shown having the field of view 152 a-152 b. In the example shown, the locations 50 a-50 n may be the subject of the monitoring” as shown in Fig. 2. Further, the “rental property owner may provide the people 70 a-70 n with the rental agreement 122” that comprises a “list of restrictions” such as “various entries that may comprise a number of people, disallowed animals, noise levels and/or behaviors” and these restrictions “the list of restrictions may be converted to parameters that may be used by the computer vision operations and/or the audio analytics to perform the detection” (see C14; L10 – 15). Refer to C31; L44 – 54 and C30; L57 – 65 wherein the “CNN module” of the system (directed to the Machine Learning (ML) system) is in charge of performing the “computer vision operations” and “may perform the object detection to determine regions of the video frame that have a high likelihood of matching the particular object” including the position of detected objects (see C31 - 32; L55 – 67 and L1 – 2) as well as “implement recognition of the objects 160 a-160 n through multiple layers of feature detection”)
use the machine learning system to identify an object of interest and an object track of the object of interest, wherein identifying the object of interest is accomplished by instructions that cause the electronic computation device to: (In C32; L3 – 10; Fig. 4 (240a – 240n and 226): teaches that the system “CNN module 240 b may conduct inferences against the machine learning model (e.g., to perform object detection)”.)
composite a bounding box around the object of interest on one or more video frame images based on motion analysis; and (In C32; L56 – 61; Fig. 2 (162): teaches that the “CNN module 240 b” may execute a data flow directed to feature extraction and matching, including “component operators that manipulate lists of components (e.g., components may be regions of a vector that share a common attribute and may be grouped together with a bounding box)”. Refer to C10; L25 – 27 for an example wherein a “dotted box 162 is shown around the head of the person 70 c” and it “may represent the camera system 104 detecting characteristics of the object 16” as further shown in Fig. 2, element 162. As for the motion analysis basis, the “CNN module 240 b” may be “configured to perform feature extraction and/or matching solely in hardware” and, by tracking “the feature points temporally, an estimate of ego-motion of the capturing platform or a motion model of observed objects in the scene may be generated” (see C31; L7 – 13).)
compare the object information for the identified object to a list of prohibited objects for the rental property; and (In C17; L21 – 28; Fig. 5 (170); Fig. 6: teaches “the camera systems 104 a-104 n may be configured to detect when each of the people 70 a-70 n first arrive and then compare the people count 170 a-170 e to a threshold (e.g., based on the entry in the list of restrictions). For example, the camera systems 104 a-104 n may determine whether a party is being held at the rental property based on various parameters (e.g., people count, loud noises, music, etc.)”. Another example of such comparison claimed is “if the landlord does not list pets as an entry on the list of restrictions, the computer vision operations may not search for pets.” (see C17; L43 – 45).)
send a prohibited object message to a remote computing device in response to the object information being in the list of prohibited objects. (In C14; L22 – 27; Figs. 6 – 7: teaches “If the data detected by the camera systems 104 a-104 n matches any of the entries on the list of restrictions, the camera system 104 may generate a notification. The notification may be a warning to the people 70 a-70 n to cure the cause of the warning. The notification may be provided to the rental property owner.” Further, the notification may “provide the list of restrictions and indicate that a violation has been detected. In some embodiments, the renter may be able to respond to the notification. The response to the notification may be sent to the landlord (e.g., to acknowledge the notification and confirm they have taken action to correct the violation)” (see C17; L35 – 41).)
Day does not explicitly teach the abilities of computing a confidence level for the object of interest and, in response to the confidence level exceeding a threshold, obtaining object information with an object type and a category. However, Cinnamon teaches:
compute a confidence level for the object of interest based on image data within the bounding box; (In C8; L13 – 25; Fig. 4 (408): teaches that “the classifier may apply a function to the n-dimensional vector resulting from the application of the weight matrix to generate a probability distribution that indicates between the pixels within a given candidate bounding box and the classes of items defined by the training data during the training phase”, wherein such function applied to “the n-dimensional vector may be a softmax function, which generates a probability distribution comprising a set of probability values” (i.e. “confidence values”). Refer to C5; L1 – 19 and C6; L29 – 50 for more details of the “classification engine” normalizing images, using a “segmenter” to identify “candidate bounding boxes” within a “candidate image”, and categorizing “the contents of the given candidate bounding box” by expressing the similarity “between the contents of a given candidate bounding box and a given class as a respective probability value” (i.e. also represented as “confidence values”) that, combined for each class, form “a probability distribution” (see C8; L21 – 50).)
in response to the confidence level exceeding a predetermined threshold, obtain object information for the object, wherein the object information includes an object type and a category for the object of interest; (In C8; L32 – 40; Fig. 4 (408): teaches “Once the classifier has determined a probability distribution for a given candidate bounding box, the classifier determines whether any of the confidence values in the distribution exceed a given threshold value, e.g. 0.95. If a probability for a given candidate bounding box exceeds the threshold value, the classification engine may classify the candidate bounding box as the class of item that meets the threshold probability, thereby identifying the candidate bounding box as the class of item”. Refer to C20 – 21; L65 – 67 and L1 – 11 for an example wherein “If the probability for a given candidate bounding box exceeds the threshold confidence value, classification engine 104 may classify the given bounding box as containing the item that meets or exceeds the threshold confidence value, thereby identifying an item in the given candidate bounding box. In some embodiments, classification engine 104 may determine that individual pixels or regions of the data contain insufficient information to make classification determinations with a sufficiently high confidence level.”)
It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Day to provide the abilities of computing a confidence level for the object of interest and, in response to the confidence level exceeding a threshold, obtaining object information with an object type and a category, as taught by Cinnamon, in order to improve “the identification of items within an object, e.g. items in objects such as pieces of baggage or a scene” (C4; L36 – 39; Cinnamon), see also MPEP 2143.I.G. Further, such abilities provided by Cinnamon into the Day system would have been obvious because the claimed invention is merely a simple arrangement of old elements, with each performing the same function it had been known to perform, yielding no more than one would expect from such an arrangement. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). In other words, all of the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by adding the well-known features of establishing a bounding box around the object of interest based on motion analysis and computing a confidence level for the object of interest based on image data within the bounding box into a computer-implemented method for monitoring and protecting a rental property incorporating machine learning and bounding box techniques). See also MPEP § 2143(I)(A).
Day teaches “behavioral analysis” by determining what the objects are doing (see C11; L47 – 67; Day) and inferences of object behavior based on “regular or rhythmic body movement” (see C11; L47 – 67; Day). Also, Day teaches that the system’s “CNN module 240b may be configured to perform feature extraction” with feature points that represent “interesting areas in the video frames (e.g., corners, edges, etc.)”, which can be tracked to generate “a motion model of observed objects in the scene”, and, with the help of a “matching algorithm” incorporated in the “CNN module 240b”, the “most probable correspondences between feature points in a reference video frame and a target video frame” can be found. Thus, during the process of matching “pairs of reference and target feature points, each feature point may be represented by a descriptor (e.g., image patch, SIFT, BRIEF, ORB, FREAK, etc.)” (see C31; L7 – 20; Day), which can be an implicit example of compressed object track analysis. However, neither Day nor Cinnamon explicitly teaches the ability of analyzing a specific compressed object track of the object of interest to infer its behavioral characteristics. Lee, however, teaches (i.e. relying on the provisional date of 11/26/2019 for Provisional Application 62/940,431):
analyze a compressed object track of the object of interest to infer one or more behavioral characteristics of the object of interest, wherein the behavioral characteristics include aggression or calmness based on movement patterns; (In ¶0063 – 64; Fig. 3 (305, 306 and 308): teaches a “camera 302” that captures a “video 305” that “also includes image frames of the person 306 deviating from the path 308, and moving towards a side of the property 304”, in accordance with ¶0063 of Applicant’s disclosure. Further, in “FIG. 3, the monitoring server 330 receives image data 312” that includes “images of the person 306 approaching the front door 342 on the path 308, the person 306 deviating from the path 308, and the person approaching the side of the property 304.” As for inferring behavioral characteristics after analyzing an object’s compressed object track, this prior art teaches an example wherein “the monitoring server 330 assigns values less than 1.0 for the monitoring system data 314 indicating the system status of “armed,” the time 2:05 am, and the front door 342 locked. The system status, time of day, and front door status indicate that it is late at night, and the resident 334 is not likely expecting visitors. Therefore, the monitoring server 330 assigns a value of 0.90 to the system status of “armed,” 0.95 to the time of 2:05 am, and 0.95 to the front door 342 locked. These three monitoring system data points have the effect of lowering the effective threshold for detecting abnormal events. Thus, the monitoring server 330 is more likely to determine that the person 306 approaching the side of the property 304 is an abnormal event, based on these factors” and as shown in Fig. 3 along with their “path 308” (see ¶0068 – 70).)
It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Day and Cinnamon to provide the ability of analyzing a specific compressed object track of the object of interest to infer its behavioral characteristics, as taught by Lee, in order to “accurately detect and classify events in order to send valid notifications to the residents” as well as “differentiate between normal and abnormal events” (¶0006; Lee), see also MPEP 2143.I.G. Further, such ability provided by Lee into the Day and Cinnamon systems would have been obvious because the claimed invention is merely applying a known technique to a known method ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). In other words, all of the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by applying the known technique of monitoring property using machine learning, wherein the path of each person is monitored and stored, to the known method and system for tracking objects such as humans on rental properties using image processing, monitoring the movements and activities of objects to differentiate between normal and abnormal events). See also MPEP § 2143(I)(D).
Regarding claim 2:
The combination of Day, Cinnamon, and Lee, as shown in the rejection above, discloses the limitations of claim 1.
Day further teaches:
wherein the object of interest comprises a person. (In C11 – 12; L 64 – 67 and L1 – 11: teaches an example wherein “the characteristics 162 may correspond to a face of the person 70 c (e.g., the detected object 160 c). The characteristics 162 may be determined for each of the detected objects 160 a-160 e (e.g., the people 70 a-70 e, items held by the people 70 a-70 e, other items in the location 50, etc.). The characteristics 162 may comprise a color of the detected objects 160 a-160 e (e.g., color of clothing worn). The characteristics 162 may comprise the size of objects (e.g., a height of a person). The characteristics 162 may comprise a classification of the detected objects 160 a-160 e (e.g., recognizing the people 70 a-70 e as distinct people, identifying an item as a television, recognizing an animal, etc.). In some embodiments, the characteristics 162 may be used by the camera system 104 to distinguish between the detected objects 160 a-160 e.”.)
Regarding claim 3:
The combination of Day, Cinnamon, and Lee, as shown in the rejection above, discloses the limitations of claim 1.
Day further teaches:
wherein the object of interest comprises an item. (In C11 – 12; L 64 – 67 and L1 – 11: teaches an example wherein “the characteristics 162 may correspond to a face of the person 70 c (e.g., the detected object 160 c). The characteristics 162 may be determined for each of the detected objects 160 a-160 e (e.g., the people 70 a-70 e, items held by the people 70 a-70 e, other items in the location 50, etc.). The characteristics 162 may comprise a color of the detected objects 160 a-160 e (e.g., color of clothing worn). The characteristics 162 may comprise the size of objects (e.g., a height of a person). The characteristics 162 may comprise a classification of the detected objects 160 a-160 e (e.g., recognizing the people 70 a-70 e as distinct people, identifying an item as a television, recognizing an animal, etc.). In some embodiments, the characteristics 162 may be used by the camera system 104 to distinguish between the detected objects 160 a-160 e.”.)
Regarding claim 4:
The combination of Day, Cinnamon, and Lee, as shown in the rejection above, discloses the limitations of claim 2.
Day further teaches:
wherein the updating includes incrementing the occupancy count in response to detecting the person traversing the boundary towards the entrance. (In C12; L32 – 44; Figs. 6 – 7: teaches an example wherein “If six guests is one of the detection parameters, then the camera system 104 may analyze video frames generated to count the number of the guests 70a-70n to determine if the number of guests is less than, equal to or greater than six guests. In the example shown, the extracted data 170a-170e about the number of the guests 70a-70e may indicate five guests are at the rental property 50. Since the number of guests is less than the amount in the detection parameters, then the camera system 104 may not indicate a breach has been detected. If more than six guests were detected, then the camera system 104 may generate the notification signal NTF to indicate that a breach of the rental agreement 122 has been detected.” Generally, the system does count the number of people at the rental property and share an occupancy message, for example, “each camera system 104 a-104 n may perform the computer vision operations to determine the number count 170 a-170 e of people and share the number count 170 a-170 e to determine a total number of occupants at the rental property” as further shown in Fig. 6 (see C16; L26 – 30). Also, refer to C11; L3-21 and L23-45 and C16; L12-37 for more examples of occupancy counts based on people entering or leaving a location.)
Regarding claim 5:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 2.
Day further teaches:
wherein the updating includes incrementing the occupancy count in response to detecting the person traversing the boundary away from the entrance. (In C12; L32 – 44; Figs. 6 – 7: teaches an example wherein “If six guests is one of the detection parameters, then the camera system 104 may analyze video frames generated to count the number of the guests 70a-70n to determine if the number of guests is less than, equal to or greater than six guests. In the example shown, the extracted data 170a-170e about the number of the guests 70a-70e may indicate five guests are at the rental property 50. Since the number of guests is less than the amount in the detection parameters, then the camera system 104 may not indicate a breach has been detected. If more than six guests were detected, then the camera system 104 may generate the notification signal NTF to indicate that a breach of the rental agreement 122 has been detected.” Generally, the system counts the number of people at the rental property and shares an occupancy message; for example, “each camera system 104a-104n may perform the computer vision operations to determine the number count 170a-170e of people and share the number count 170a-170e to determine a total number of occupants at the rental property” as further shown in Fig. 6 (see C16; L26 – 30). Also, refer to C11; L3-21 and L23-45 and C16; L12-37 for more examples of occupancy counts based on people entering or leaving a location.)
Regarding claim 6:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 1.
Day further teaches:
further comprising: obtaining a maximum occupancy value for a specific reservation for the rental property, and sending an alert message to the remote computing device in response to the occupancy count exceeding the maximum occupancy value. (In C12; L32 – 44; Figs. 6 – 7: teaches an example wherein “If six guests is one of the detection parameters, then the camera system 104 may analyze video frames generated to count the number of the guests 70a-70n to determine if the number of guests is less than, equal to or greater than six guests. In the example shown, the extracted data 170a-170e about the number of the guests 70a-70e may indicate five guests are at the rental property 50. Since the number of guests is less than the amount in the detection parameters, then the camera system 104 may not indicate a breach has been detected. If more than six guests were detected, then the camera system 104 may generate the notification signal NTF to indicate that a breach of the rental agreement 122 has been detected.” Generally, the system counts the number of people at the rental property and shares an occupancy message; for example, “each camera system 104a-104n may perform the computer vision operations to determine the number count 170a-170e of people and share the number count 170a-170e to determine a total number of occupants at the rental property” as further shown in Fig. 6 (see C16; L26 – 30). Also, refer to C11; L3-21 and L23-45 and C16; L12-37 for more examples of occupancy counts based on people entering or leaving a location.)
Regarding claim 7:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 1.
Day further teaches:
further comprising: acquiring a sound level for the rental property; in response to the sound level exceeding a predetermined threshold, sending an automated message to a remote computing device. (In C3; L58 – 67 and C7; L1-24 to C8; L5 – 24: discusses applying audio analysis to determine breaches of a rental agreement and send notifications to the owner and/or guest. Refer to C15; L25 – 52 for an example of noise levels breaching the contract and notifying the users.)
Regarding claim 8:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 7.
Day further teaches:
wherein the sound level is acquired from an interior location within the rental property. (In C3; L58 – 67 and C7; L1-24 to C8; L5 – 24: discusses applying audio analysis when acquiring audio and sound/noise levels to determine breaches of a rental agreement. Refer to C15; L59 – 67 for an example of noise levels detected in the interior of the rental property where the guest is sleeping.)
Regarding claim 9:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 7.
Day further teaches:
wherein the sound level is acquired from an exterior location within the rental property. (In C3; L58 – 67 and C7; L1-24 to C8; L5 – 24: discusses applying audio analysis to determine breaches of a rental agreement. Refer to C17; L25 – 28 for an example of noise levels detected in the surroundings of the rental property where the guests are holding a party (see C16; L45 – 57).)
Regarding claim 10:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 6.
Day further teaches:
further comprising: obtaining booking criteria; and performing a prebooking information request in response to the booking criteria. (In C6; L1-67; Figs. 9 – 10: teaches that “a rental request may be communicated to the server computers 102a-102n as the signal RENTREQ. The rental request signal RENTREQ may provide a list attributes that the renter is seeking in a rental property. Details of the rental request web interface 126 may be described in association with FIG. 9” as well as Fig. 10 to find close matches of a rental property meeting the renter’s needs/preferences to then agree with the property owner via a “rental agreement”. Refer to C3; L58-64 wherein “the web-based application may automatically check the requirements of the renter against the rules defined by the property owner. For example, the owner may specify the maximum number of people allowed, whether a pet is allowed, and whether loud music is allowed to be played. If the requirements of the renter fall within the rules of the owner, then a rental booking may be made”.)
Regarding claim 11:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 10.
Day further teaches:
wherein the booking criteria includes a rental property distance value below a predetermined distance. (In C6; L1-28: teaches that “the server computers 102a-102n may provide the rental listings 120 (e.g., as the signal WEB) that prospective renters may browse through and/or may use the input signal RENTREQ from the rental request to find properties that closest match what the prospective renter is searching for according to the parameters entered into the web interface 126. For example, the server computers 102a-102n may be configured to filter the available listings 120 based on the data provided in the signal RENTREQ (e.g., at a specific location, available at particular times, allows a particular number of guests, allows pets, etc.). For example, the server computers 102a-102n may provide a match-making service to enable property owners to find suitable renters and to enable prospective renters to find a suitable rental property.”)
Regarding claim 12:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 10.
Day further teaches:
wherein the booking criteria includes a difference between a maximum occupancy value the specific reservation and a current number of people in the property. (In C35; L50 – 64; Figs. 6 – 7: teaches an example wherein “In the example shown, the video processing pipeline of the processor 130 may detect a breach in the terms of the rental agreement 122 (e.g., too many people have been detected at the rental property 50). For example, the feature set may provide instructions for counting the number of people in the video frames 270a-270n, and the computer vision modules 260 may detect a greater number of visitors (e.g., 5) than the maximum allowable number of visitors in the rental agreement 122 (e.g., 3). The computer vision modules 260 may extract the data 170 that indicates the number of people in the video frames 270a-270n (and additional data according to the other detection parameters). In the example shown, the extracted data 170 may indicate a breach of the rental agreement 122.” Also, refer to C42; L38 – 50 for another example of a “restriction input field” for the “maximum number of guests allowed”, wherein the visitors or guests will be detected as breaching the contract in place once it is agreed and activated.)
Regarding claim 14:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 1.
Day further teaches:
wherein the machine learning system is configured and disposed to identify behavior, aggression and characteristics of one or more people in the field of view of the digital camera by analyzing compressed object tracks of the people. (In C10; L25 – 39; Fig. 2: teaches “a dotted box 162 is shown around the head of the person 70c” which represents “the camera system 104 detecting characteristics of the object 160c” wherein such characteristics may be analyzed “to determine what the objects 160a-160n are (e.g., classification), determine what the objects 160a-160n are doing (e.g., behavior analysis) and/or to distinguish one object from another object”, in accordance with ¶0063 of Applicant's disclosure. Further, “the camera system 104 may be configured to determine a behavior of the objects 160a-160n” and “inferences may be made about the behavior of the objects 160a-160n based on the characteristics 162 detected”. In an example, “regular or rhythmic body movement may be determined to be dancing. The body movement may be compared to the audio data (e.g., music) to determine whether the behavior indicates dancing.” (see C11; L47 – 67). Refer to C31; L7 – 20 wherein “CNN module 240b may be configured to perform feature extraction and/or matching solely in hardware. Feature points typically represent interesting areas in the video frames (e.g., corners, edges, etc.). By tracking the feature points temporally, an estimate of ego-motion of the capturing platform or a motion model of observed objects in the scene may be generated. In order to track the feature points, a matching algorithm is generally incorporated by hardware in the CNN module 240b to find the most probable correspondences between feature points in a reference video frame and a target video frame. In a process to match pairs of reference and target feature points, each feature point may be represented by a descriptor (e.g., image patch, SIFT, BRIEF, ORB, FREAK, etc.)”, which can be examples of compressed object tracks.)
Neither Day nor Cinnamon explicitly teaches the ability of analyzing a specific compressed object track of the people to infer behavioral characteristics. However, Lee teaches (i.e., relying on the provisional date of 11/26/2019 for Provisional Application 62/940,431):
…by analyzing compressed object tracks of the people; (In ¶0063 – 64; Fig. 3 (305, 306 and 308): teaches a “camera 302” that captures a “video 305” that “also includes image frames of the person 306 deviating from the path 308, and moving towards a side of the property 304”, in accordance with ¶0063 of Applicant's disclosure. Further, in “FIG. 3, the monitoring server 330 receives image data 312” that includes “images of the person 306 approaching the front door 342 on the path 308, the person 306 deviating from the path 308, and the person approaching the side of the property 304.” As for inferring behavioral characteristics after analyzing an object’s compressed object track, this prior art teaches an example wherein “the monitoring server 330 assigns values less than 1.0 for the monitoring system data 314 indicating the system status of “armed,” the time 2:05 am, and the front door 342 locked. The system status, time of day, and front door status indicate that it is late at night, and the resident 334 is not likely expecting visitors. Therefore, the monitoring server 330 assigns a value of 0.90 to the system status of “armed,” 0.95 to the time of 2:05 am, and 0.95 to the front door 342 locked. These three monitoring system data points have the effect of lowering the effective threshold for detecting abnormal events. Thus, the monitoring server 330 is more likely to determine that the person 306 approaching the side of the property 304 is an abnormal event, based on these factors” and as shown in Fig. 3 along with the “path 308” (see ¶0068 – 70).)
It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Day and Cinnamon to provide the ability of analyzing a specific compressed object track of the people to infer behavioral characteristics, as taught by Lee, in order to “accurately detect and classify events in order to send valid notifications to the residents” as well as “differentiate between normal and abnormal events” (¶0006; Lee), see also MPEP 2143.I.G. Further, such ability provided by Lee into the Day and Cinnamon systems would have been obvious because the claimed invention is merely applying a known technique to a known method ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). In other words, all of the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by applying the known technique of monitoring property using machine learning, wherein the path of each person is monitored and stored, to the known method and system for tracking objects such as humans on rental properties using image processing, monitoring the movements and activities of objects to differentiate between normal and abnormal events). See also MPEP § 2143(I)(D).
Regarding claim 15:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 1.
Day further teaches:
wherein the machine learning system is configured and disposed to identify maintenance personnel entering and exiting the rental property. (In C12; L47 – 57: teaches that “the detection parameters may comprise duties and/or requirements of the property owner. For example, when the property owner is preparing the property 50 for the renter, the camera system 104 may provide a check that the property has provided all the amenities agreed to in the rental agreement 122 (e.g., left out clean towels and clean bedsheets, left out toiletries, etc.). In some embodiments, the camera system 104 may be configured to detect particular events that the property owner has agreed to respond to in the rental agreement 122 (e.g., fixing a water leak, replacing a broken appliance, etc.).”)
Regarding claim 17:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 16.
Day further teaches:
wherein the memory further includes instructions, that when executed by the processor, cause the electronic computation device to identify an object of interest that comprises a person. (In C11 – 12; L64 – 67 and L1 – 11: teaches that in “the example shown, the characteristics 162 may correspond to a face of the person 70c (e.g., the detected object 160c). The characteristics 162 may be determined for each of the detected objects 160a-160e (e.g., the people 70a-70e, items held by the people 70a-70e, other items in the location 50, etc.). The characteristics 162 may comprise a color of the detected objects 160a-160e (e.g., color of clothing worn). The characteristics 162 may comprise the size of objects (e.g., a height of a person). The characteristics 162 may comprise a classification of the detected objects 160a-160e (e.g., recognizing the people 70a-70e as distinct people, identifying an item as a television, recognizing an animal, etc.). In some embodiments, the characteristics 162 may be used by the camera system 104 to distinguish between the detected objects 160a-160e.”)
Regarding claim 18:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 16.
Day further teaches:
wherein the memory further includes instructions, that when executed by the processor, cause the electronic computation device to identify an object of interest that comprises an item. (In C11 – 12; L64 – 67 and L1 – 11: teaches that in “the example shown, the characteristics 162 may correspond to a face of the person 70c (e.g., the detected object 160c). The characteristics 162 may be determined for each of the detected objects 160a-160e (e.g., the people 70a-70e, items held by the people 70a-70e, other items in the location 50, etc.). The characteristics 162 may comprise a color of the detected objects 160a-160e (e.g., color of clothing worn). The characteristics 162 may comprise the size of objects (e.g., a height of a person). The characteristics 162 may comprise a classification of the detected objects 160a-160e (e.g., recognizing the people 70a-70e as distinct people, identifying an item as a television, recognizing an animal, etc.). In some embodiments, the characteristics 162 may be used by the camera system 104 to distinguish between the detected objects 160a-160e.”)
Regarding claim 19:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 17.
Day further teaches:
wherein the memory further includes instructions, that when executed by the processor, cause the electronic computation device to increment an occupancy count in response to detecting the person traversing the boundary towards the entrance. (In C12; L32 – 44; Figs. 6 – 7: teaches an example wherein “If six guests is one of the detection parameters, then the camera system 104 may analyze video frames generated to count the number of the guests 70a-70n to determine if the number of guests is less than, equal to or greater than six guests. In the example shown, the extracted data 170a-170e about the number of the guests 70a-70e may indicate five guests are at the rental property 50. Since the number of guests is less than the amount in the detection parameters, then the camera system 104 may not indicate a breach has been detected. If more than six guests were detected, then the camera system 104 may generate the notification signal NTF to indicate that a breach of the rental agreement 122 has been detected.” Generally, the system counts the number of people at the rental property and shares an occupancy message; for example, “each camera system 104a-104n may perform the computer vision operations to determine the number count 170a-170e of people and share the number count 170a-170e to determine a total number of occupants at the rental property” as further shown in Fig. 6 (see C16; L26 – 30). Also, refer to C11; L3-21 and L23-45 and C16; L12-37 for more examples of occupancy counts based on people entering or leaving a location.)
Regarding claim 21:
The combination of Day and Cinnamon, as shown in the rejection above, discloses the limitations of claim 1.
Day further teaches:
further comprising: retrieving reservation data for the rental property, wherein the reservation data includes an occupancy limit; (In C12; L15 – 23; Figs. 6 – 7: teaches an example wherein “the rental agreement 122 may indicate a limitation on the number of guests allowed at the rental property 50. The detection engine 124 may convert the machine readable version of the rental agreement 122 into detection parameters that may be usable by the camera system 104 at the rental property 50 shown. The detection parameters may provide computer readable instructions about what types of objects and/or scenarios that the camera system 104 should detect at the rental property 50”. Also, refer to C3; L58-64 for more booking details.)
comparing the updated occupancy count to the occupancy limit; (In C12; L24 – 31; Figs. 6 – 7: teaches an example wherein “the rental agreement 122 may indicate a maximum of six guests, the detection engine 124 may query the camera system 104 to determine a format of the feature set for the camera system 104, the detection engine 124 may convert the guest limitation from the rental agreement 122 into the feature set, and the processor 130 of the camera system 104 may convert the feature set into detection parameters used to perform the computer vision operations. If six guests is one of the detection parameters, then the camera system 104 may analyze video frames generated to count the number of the guests 70a-70n to determine if the number of guests is less than, equal to or greater than six guests.”)
in response to the occupancy count exceeding the occupancy limit, generating an alert message; and sending the alert message to the remote computing device, wherein the alert message includes a date and time, and the updated occupancy count; (In C12; L41 – 44; Figs. 6 – 7: teaches in this same example that “If more than six guests were detected, then the camera system 104 may generate the notification signal NTF to indicate that a breach of the rental agreement 122 has been detected.”.)
wherein the alert message is sent to at least one of a property manager and a guest. (In C14; L22 – 27; Figs. 6 – 7: teaches “If the data detected by the camera systems 104 a-104 n matches any of the entries on the list of restrictions, the camera system 104 may generate a notification. The notification may be a warning to the people 70 a-70 n to cure the cause of the warning. The notification may be provided to the rental property owner.”)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gaudin (U.S. Patent No. 10453149 B1) is pertinent because it “relates to property costs and, more particularly to systems and methods for analyzing property telematics data to update risk-based coverage—for example, updating a homeowner's insurance policy—of a property.”
Marcheselli (U.S. Pub No. 20150221094 A1) is pertinent because it “relates to the field of object detection, tracking, and counting. In specific, the present invention is a computer-implemented detection and tracking system and process for detecting and tracking human objects of interest that appear in camera images taken, for example, at an entrance or entrances to a facility, as well as counting the number of human objects of interest entering or exiting the facility for a given time period.”
Deros (U.S. Pub No. 20180110093 A1) is pertinent because its “invention provides an associated mobile application (referred to herein as an “app”) running on a mobile device to motivate guests to permit the mobile app to track guest location, habits, routines, patterns and behavior, both on and off the premises to thereby facilitate customizing the user experience and developing targeted and customized marketing and messaging programs with contextual awareness of the guest's activities.”
Dao-ping (CN Pub No. 106570641 A) is pertinent because it is about a system that collects facility operation data, customer behavior data, hotel business ecological data and real-time and accurate data for hotel service so as to perform the business decision-making process in an efficient and scientific manner.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ivonnemary Rivera Gonzalez whose telephone number is (571)272-6158. The examiner can normally be reached Mon - Fri 9:00AM - 5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Uber can be reached at (571) 270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IVONNEMARY RIVERA GONZALEZ/Examiner, Art Unit 3626
/NATHAN C UBER/Supervisory Patent Examiner, Art Unit 3626