Prosecution Insights
Last updated: April 19, 2026
Application No. 18/435,206

DOORBELL COMMUNITIES

Status: Non-Final OA (§103)
Filed: Feb 07, 2024
Examiner: BECKER, JOSEPH W
Art Unit: 2483
Tech Center: 2400 (Computer Networks)
Assignee: SkyBell Technologies IP LLC
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 9m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 72% (278 granted / 386 resolved), +14.0% vs Tech Center average
Interview Lift: +25.6% higher allowance rate among resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 22 applications currently pending
Career History: 408 total applications across all art units
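The headline figures above reduce to simple ratios over the examiner's disposed cases. A minimal sketch of the arithmetic, assuming the conventional definitions (allow rate = granted / resolved; interview lift = allowance rate with an interview minus the rate without); the function names are illustrative, not from any analytics vendor's API:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of disposed (resolved) applications."""
    return 100.0 * granted / resolved

def interview_lift(rate_with_interview: float, rate_without: float) -> float:
    """Percentage-point gain in allowance rate for cases that had an interview."""
    return rate_with_interview - rate_without

# Figures from this examiner's record: 278 granted of 386 resolved.
print(f"{allow_rate(278, 386):.1f}%")  # 72.0%
```

The underlying with/without-interview case counts behind the +25.6% lift are not shown above, so only the allow-rate computation can be reproduced from the stated data.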

Statute-Specific Performance

§101:  4.1% (-35.9% vs TC avg)
§102: 17.9% (-22.1% vs TC avg)
§103: 56.9% (+16.9% vs TC avg)
§112:  8.9% (-31.1% vs TC avg)

TC averages are estimates. Based on career data from 386 resolved cases.
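Each per-statute delta is just the examiner's rate minus the Tech Center baseline. A sketch of the comparison (the 40.0% baseline below is back-solved from the §103 row above, so it is an estimate, not a published figure):

```python
# Examiner's per-statute rejection rates (%), from the chart above.
examiner = {"§101": 4.1, "§102": 17.9, "§103": 56.9, "§112": 8.9}

def delta_vs_tc(rate: float, tc_avg: float) -> str:
    """Format the signed percentage-point difference against the TC average."""
    return f"{rate - tc_avg:+.1f}% vs TC avg"

# Back-solved baseline: 56.9% minus the reported +16.9% delta gives 40.0%.
print(delta_vs_tc(examiner["§103"], 40.0))  # +16.9% vs TC avg
```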

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lipert et al. (US 2016/0224837) in view of Fadell et al. (US 2015/0156031, from IDS) and Russell et al. (US 2015/0145991).

Lipert discloses:

1. A computer-readable, non-transitory, programmable product, comprising code, executable by a processor, for causing the processor to perform the steps of: causing image and data information to be stored on a first database (Fig. 4: databases e.g. 626; 0018: Server 620 stores user information, the extracted metadata, and video frames containing detected faces in relational database 626); causing a selection of the image and data information (Fig. 4; 0018: Server 620 processes the visual query, collecting user information, extracting metadata from the video footage, and running face detection software. Server 620 stores user information, the extracted metadata, and video frames containing detected faces in relational database 626. To aid in identifying detected faces, server 620 can cull the Clark County Police Department's database 628. Identification of any face detected is sent back to users 618, 620 and displayed on the screen of desktop computer (user system 618), augmenting the live footage being streamed; identification is occurring in real time. Identification of detected faces occurs quickly due to the heuristic use of metadata extracted from and associated with the image data obtained from the visual query (queries).
While Gamblers Casino is compiling its own database 626, it can add additional remote databases (not illustrated) as storage requirements necessitate… The order in which databases 620, 628, 644, 646, 648 are accessed for identifying potential matching-candidates, and how the image data containing potential matching-candidates stored within databases 628, 644, 646, 648 are rank-ordered based on available metadata, is a neural network model, wherein system 110 determines the “best” way to use the available metadata to rank-order matching-candidates. Once rank-ordered, system servers 620, 642 employ facial recognition algorithms to the first rank-ordered candidate (i.e., most likely matching-candidate) before moving onto the 2.sup.nd, 3.sup.rd, 4.sup.th, 5.sup.th, . . . possible matching-candidates); causing a selection of a plurality of remote computing devices (Fig. 4: user app 624; 0018: Users systems 638, 640 are desktop computers and run application 624 according to an embodiment of the present invention, and again are monitored by security personnel of Gamblers casino. Application 624 on the desktop computers gathers information about user systems 638, 640, such as date, time, location, and transmits the user information and video footage to server 642 via network 660. Server 642 processes the user information, extracts metadata from the video footage, and stores user information, extracted metadata, and video frames containing detected faces in relational database 644. Additionally, server 642 can access the remote Washoe County Police database 646 as well as remote database 648, which contains image data and metadata associated with “friends” of the Gamblers Casino's Facebook page (that is where Facebook users have “friended” Gamblers Casino). 
As individuals are captured by any camera (612, 614, 616, 632, 634, 636) at either location (610, 630) application 224 of the present invention automatically (without prompting from users 618, 638, 640) attempts to identify any face detected from the images sent to servers 624, 642.); causing a selection of a subset of the image and data information (0018: Identification of any face detected is sent back to users 618, 620 and displayed on the screen of desktop computer (user system 618), augmenting the live footage being streamed; identification is occurring in real time. Identification of detected faces occurs quickly due to the heuristic use of metadata extracted from and associated with the image data obtained from the visual query (queries).); causing the selection of the subset of the image and data information to be selectively shared, over a wireless network, with the plurality of remote computing devices (Fig. 1: user application 126+, databases 120, 220, networks 112, 212, cameras 114+, servers 118+, user systems 124+; 0013: User system 124 can be any computing device with the ability to communicate through network 112, such as a smart phone, cell phone, a tablet computer, a laptop computer, a desktop computer, a server, etc.; Fig. 4; 0018); and causing one or more images, taken within a geographical area, to be compared and identified in connection with trait characterizations sourced from one or more of the plurality of remote computing devices (Fig. 4: cameras capturing areas 610 or 630; 0018-19); Lipert does not explicitly disclose the following, however Fadell teaches wherein each remote computing device from the plurality of remote computing devices is communicatively coupled through an associated network to an associated doorbell (Fig. 1: smart doorbell 106; Fig. 2: smart home environments 100, 100a-f). 
Therefore, it would have been obvious to a person having ordinary skill before the effective filing date to modify the reference(s) as above in order to obtain, at a computing system from a doorbell smart device positioned outside a structure of a smart environment, a condition of a phenomenon outside the structure detected by the doorbell smart device; to analyze, with the computing system, the detected condition of the phenomenon obtained from the doorbell smart device in combination with another detected condition of the phenomenon; and to automatically generate a report using the computing system based on the analysis (Fadell 0005).

Russell teaches: and wherein causing the selection of the image and data information comprises causing a reception of an input by a user (Fig. 3; 0049: operation of the surveillance system "back end" based, in part, on inputs provided from a "front end" via the user interfaces. For example, a user can access the Status Board SB111-1 using a user interface and select one or more images to be designated as shared on the COP 113). Therefore, it would have been obvious to a person having ordinary skill before the effective filing date to modify the reference(s) as above in order to personalize and share image data retrieved from surveillance sources with multiple surveillance devices and multiple users and/or agencies (Russell 0042).

2. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 1, for further causing the processor to perform analytics on the image and data information stored on the first database (0013: Remote relational database can store images received from cameras 114, 116, can store metadata extracted from images captured from cameras 114, 116, and can store visual query search results, and reference images captured from cameras 114, 116.
Remote databases 122 can be accessed by server 118 to collect, link, process, and identifying image data and the images' associated metadata recorded by cameras 114, 116 at different times; Fig. 1, 3, 4).

3. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 2, wherein the analytics include classification of the image and data information of the first database based on trends and/or abnormal behaviors (0013: Recognition system 110 will often be used as a core to a larger, proprietary analytical solution, and accordingly user application 126 is customizable depending on the needs of the user, such as identifying repeat customers in a retail setting, identifying known criminals at a border crossing, identifying the frequency a specific product occurs at a specific location, identifying product defects, or tracking product inventory).

4. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 3, wherein the analytics consists of visit or visitor information to one or more locations; criminal history information associated with a visitor; comparison of database information on the first database with information stored on a second database; and combinations thereof (0013: Recognition system 110 will often be used as a core to a larger, proprietary analytical solution, and accordingly user application 126 is customizable depending on the needs of the user, such as identifying repeat customers in a retail setting, identifying known criminals at a border crossing, identifying the frequency a specific product occurs at a specific location, identifying product defects, or tracking product inventory; 0019: The system and method for collecting, linking, and processing image data to identifying faces or objects is not limited to situations where crime prevention or criminal detection is required.
A retail store with locations throughout the Midwest United States might want to implement a new marketing campaign. Before implementing the campaign the store would like to identify the demographic breakdown of its patrons.; 0017: all image data containing detected faces are stored with all associated metadata in one or more relational databases; 0018: The order in which databases 620, 628, 644, 646, 648 are accessed for identifying potential matching-candidates, and how the image data containing potential matching-candidates stored within databases 628, 644, 646, 648 are rank-ordered based on available metadata, is a neural network model, wherein system 110 determines the “best” way to use the available metadata to rank-order matching-candidates; Figs. 1, 3, 4). 5. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 1, for further causing the processor to cause unauthorized building intrusion alerts to be sent to the plurality of remote computing devices, and wherein the trait characterizations include a trait selected from the group consisting of behavior, identity or bodily characteristics (0012: Cameras 114, 116 can be used at stationary surveillance locations such as intersections, toll booths, public or private building entrances, bridges, etc.; 0016: an application that delivers security alerts to employees cellphones, an application that creates real-time marketing data, sending custom messages to individuals, an application for continuous improvement studies, etc.; 0019: collecting, linking, and processing image data to identifying faces or objects is not limited to situations where crime prevention or criminal detection is required; 0013: Recognition system 110 will often be used as a core to a larger, proprietary analytical solution, and accordingly user application 126 is customizable depending on the needs of the user, such as identifying repeat customers in a retail setting, 
identifying known criminals at a border crossing, identifying the frequency a specific product occurs at a specific location, identifying product defects, or tracking product inventory; Fig. 4; 0018-9).

6. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 3, for further causing the processor to cause an alert to be sent to the plurality of remote computing devices in response to abnormal behaviors determined through the analytics (0016: an application that delivers security alerts to employees cellphones, an application that creates real-time marketing data, sending custom messages to individuals, an application for continuous improvement studies, etc.; 0019: collecting, linking, and processing image data to identifying faces or objects is not limited to situations where crime prevention or criminal detection is required; 0013: Recognition system 110 will often be used as a core to a larger, proprietary analytical solution, and accordingly user application 126 is customizable depending on the needs of the user, such as identifying repeat customers in a retail setting, identifying known criminals at a border crossing, identifying the frequency a specific product occurs at a specific location, identifying product defects, or tracking product inventory).

7. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 1, wherein the code causes the processor to cause selected image and data information to be shared, over the wireless network, with only selected ones of the plurality of remote computing devices, as determined by one or more members of a user group defined by the plurality of remote computing devices (0011: In many applications a private network, which may use Internet protocol, but is not open to the public will be employed.
Cameras 114 and 116 are connected to network 112, as is server 118, remote database 120, and user 124. The peer-to-peer architecture allows additional cameras, servers, and users to be added to recognition system 110 for quick expansion.; 0014: Server 118 and second server 218 can communicate through a private or public wireless (or wired) connection 230, allowing two different physical locations to share image data. In the present invention various sources of image data are shared and stored at different physical locations and accordingly potential image matches comprise images captured from more than one image source, and are stored in more than one physical location. Potential image matches/matching-candidates could be contained in private databases comprised of historical data compiled by the user or could be gleaned from private or public databases from which the user has been granted access (e.g. local, state, or federal law enforcement databases, Facebook, LinkedIn, etc.); Fig. 3: has image data 520, meta data, and returns results to the user at 590).

8.
The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 1, wherein the image and data information includes visitor time of arrival, unauthorized building intrusion information, and trait information (0012: Cameras 114, 116 can be used at stationary surveillance locations such as intersections, toll booths, public or private building entrances, bridges, etc.; 0016: an application that delivers security alerts to employees cellphones, an application that creates real-time marketing data, sending custom messages to individuals, an application for continuous improvement studies, etc.; 0019: collecting, linking, and processing image data to identifying faces or objects is not limited to situations where crime prevention or criminal detection is required; 0018: Application 224 gathers information about user system 618, such as date, time, location, etc., and transmits the user information and video footage to server 620 via network 622. Server 620 processes the visual query, collecting user information, extracting metadata from the video footage, and running face detection software. Server 620 stores user information, the extracted metadata, and video frames containing detected faces in relational database 626. To aid in identifying detected faces, server 620 can cull the Clark County Police Department's database 628. 
Identification of any face detected is sent back to users 618, 620 and displayed on the screen of desktop computer (user system 618), augmenting the live footage being streamed; 0013: Recognition system 110 will often be used as a core to a larger, proprietary analytical solution, and accordingly user application 126 is customizable depending on the needs of the user, such as identifying repeat customers in a retail setting, identifying known criminals at a border crossing, identifying the frequency a specific product occurs at a specific location, identifying product defects, or tracking product inventory… . Remote databases 122 can be accessed by server 118 to collect, link, process, and identifying image data and the images' associated metadata recorded by cameras 114, 116 at different times.).

9. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 1, wherein data information includes contact information associated with the plurality of remote computing devices (0014: Potential image matches/matching-candidates could be contained in private databases comprised of historical data compiled by the user or could be gleaned from private or public databases from which the user has been granted access (e.g. local, state, or federal law enforcement databases, Facebook, LinkedIn, etc.)).

10. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 9, Lipert does not explicitly disclose the following, however Russell teaches wherein the contact information includes selected ones of name, phone number and e-mail address information (0083: The surveillance data structure SDS could include associated data that is associated with each individual image and/or a collection of images from a common overall source (e.g., a user). This associated data could include a user data structure UDS as well as a device data structure DDS.
The user data structure UDS could include information related to a user including, but not limited to, a user first name, last name, email address, one or more additional email addresses (e.g., secondary email address), one or more home mailing addresses, and/or one or more phone numbers (e.g., primary phone, work cell phone, personal cell phone). Additional information in the user data structure UDS could include organization information related to the user's place of employment (e.g., company, supervisor, contact information of supervisor including an email address) as well as other information relevant to the user (e.g., user location information including a time zone the user is normally or currently present).). Therefore, it would have been obvious to a person having ordinary skill before the effective filing date to modify the reference(s) as above in order to allow further information to be associated with each image and/or a collection of images taken together from the different sensors (Russell 0083).

11. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 1, Lipert does not explicitly disclose the following, however Fadell teaches wherein the data information includes video and audio information (0243: an important underlying functionality of the smart doorbell 106 is to serve as a home entryway interface unit, providing a doorbell functionality (or other visitor arrival functionality), audio/visual visitor announcement functionality, and like functionalities.). Therefore, it would have been obvious to a person having ordinary skill before the effective filing date to modify the reference(s) as above in order to obtain, at a computing system from a doorbell smart device positioned outside a structure of a smart environment, a condition of a phenomenon outside the structure detected by the doorbell smart device; to analyze, with the computing system, the detected condition of the phenomenon obtained from the doorbell smart device in combination with another detected condition of the phenomenon; and to automatically generate a report using the computing system based on the analysis (Fadell 0005).

12. The computer-readable, non-transitory, programmable product comprising code, executable by a processor, as recited in claim 1, Lipert does not explicitly disclose the following, however Fadell teaches wherein the image and data information includes sensor information received at a plurality of doorbell locations (Fig. 1: smart doorbell 106; Fig. 2: smart home environments 100, 100a-f). Therefore, it would have been obvious to a person having ordinary skill before the effective filing date to modify the reference(s) as above in order to obtain, at a computing system from a doorbell smart device positioned outside a structure of a smart environment, a condition of a phenomenon outside the structure detected by the doorbell smart device; to analyze, with the computing system, the detected condition of the phenomenon obtained from the doorbell smart device in combination with another detected condition of the phenomenon; and to automatically generate a report using the computing system based on the analysis (Fadell 0005).

13.
The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 4, wherein the visit or visitor information consists of a number of people at a location during a period of time; a time a person or persons spends at a location; a number of locations a person or persons visits; and a combination thereof (0019: A retail store with locations throughout the Midwest United States might want to implement a new marketing campaign. Before implementing the campaign the store would like to identify the demographic breakdown of its patrons. The customizable system and method of the present invention would be tailored not to identify the individuals captured by security cameras, but to simply return results of the sex and age of shoppers, the date and location of the store visited, time of visit, etc. to store management. The results would not be returned in an augmented reality format as discussed in regards to FIG. 4, where identifying information is displayed directly on the video footage, but would be received in a spreadsheet format allowing the data to be easily sorted. The retail store could not only search and build their own databases, but also access social networking databases such as Facebook and/or LinkedIn to help in determining the sex and age of the shoppers. The metadata extracted from the image data would be used heuristically, rank-ordering potential matching-candidates, before combing different data sets, and linking different types of information to create a profile—linking social networking habits with biometric data, creating a database for innumerable business opportunities. Candidates having metadata associated with a home location not in the Midwest, would be moved to the bottom of the rank-ordering, and would most likely not be reported to the user.; ). 
Fadell additionally teaches wherein the visit or visitor information consists of a number of people at a location during a period of time; a time a person or persons spends at a location; a number of locations a person or persons visits; and a combination thereof (0105: For example, the data collected and logged may include maps of homes, maps of users' in-home movements from room to room as determined by network-connected smart devices equipped with motion and/or identification technology, time spent in each room, intra-home occupancy maps that indicate which rooms are occupied and by whom at different times; Fig. 2: paths between houses; 0459: analyze a tracked path of a visitor for making inferences regarding the likelihood of a predicted future path for that visitor, and to adjust functionalities of a smart environment based on such tracking and predicting. For example, in response to detecting a suspected criminal at environment 100a at a first location LA at a first time, platform 200 may inform law enforcement entity 222 of the specifics of that detection and/or may adjust the functionality of one or more environments within a certain distance DD of location LA, such as environment 100b at location LB and environment 100c at location LC and environment 100 at location LU (e.g., by increasing a security level of one or more smart devices of environment 100), but not remote environments 100d, 100e, or 100f that may be beyond distance DD of detecting location LA. 
However, if the next time that same suspected criminal is detected by platform 200 is at environment 100c at location LC at a second time, then platform 200 may be operative to determine that the tracked path of the suspected criminal (e.g., the direction of tracked path TPA) is moving farther away from location LU of environment 100 and, in response to that determination, platform 200 may be operative to once again adjust the functionality of environment 100, but this time by reverting the functionality back to its previous settings (e.g., by reducing a security level of one or more smart devices of environment 100).). Therefore, it would have been obvious to a person having ordinary skill before the effective filing date to modify the reference(s) as above in order to obtain, at a computing system from a doorbell smart device positioned outside a structure of a smart environment, a condition of a phenomenon outside the structure detected by the doorbell smart device; to analyze, with the computing system, the detected condition of the phenomenon obtained from the doorbell smart device in combination with another detected condition of the phenomenon; and to automatically generate a report using the computing system based on the analysis (Fadell 0005).

14.
The computer-readable, non-transitory, programmable product, comprising code, executable by a processor, as recited in claim 12, Lipert does not explicitly disclose the following, however Fadell teaches wherein the sensor information consists of motion detection; proximity detection; burglar alarm output; fire detection; smoke detection; and a combination thereof (0326: Head unit 804 of smart doorbell 106 may additionally or alternatively include any number of any suitable sensor components 828, such as any suitable temperature sensor, any suitable humidity sensor, any suitable occupancy sensor, any suitable ambient light sensor (ALS), any suitable fire sensor, any suitable smoke sensor, any suitable carbon monoxide (CO) sensor, any suitable proximity sensor, any suitable passive infrared (PIR) or other motion sensor, any suitable ultrasound sensor, any suitable still or video camera or scanner (e.g., a charge-coupled device (CCD) camera, a complementary metal-oxide-semiconductor (CMOS) camera, or any suitable scanner (e.g., a barcode scanner or any other suitable scanner that may obtain identifying information from a code, such as a linear barcode, a two-dimensional or matrix barcode (e.g., a quick response ("QR") code), a three-dimensional barcode, or the like), any suitable near-field communication (NFC) technique sensor (e.g., for sensing an entity wearing an infrared or NFC-capable smartphone or other suitable element), any suitable RFID technique sensor (e.g., for sensing an entity wearing an RFID bracelet, RFID necklace, RFID key fob, or other suitable element), any suitable audio sensor (e.g., a microphone, which may operate in conjunction with an audio-processing application that may be accessible to doorbell 106 (e.g., at environment 100, system 164, or other accessible component(s) of platform 200), which may identify a particular voice or other specific audio data for authenticating a user or for any other suitable purpose)), any suitable biometric 
sensor (e.g., a fingerprint reader or heart rate sensor or facial recognition sensor or any other feature recognition sensor, which may operate in conjunction with a feature-processing application that may be accessible to doorbell 106 (e.g., at environment 100, system 164, or other accessible component(s) of platform 200), for authenticating a user or for any other suitable purpose), and the like. A rechargeable battery 832 or any equivalently capable onboard power storage medium may also be provided by head unit 804. For example, battery 832 can be a rechargeable Lithium-Ion battery. In operation, smart doorbell 106 may charge battery 832 during time intervals in which the hardware power usage is less than what power stealing can safely provide, and may discharge to provide any needed extra electrical power during time intervals in which the hardware power usage is greater than what power stealing can safely provide, which may thereby extract power as needed from the 120V "hot" line voltage wire or other suitable power source. Battery 832 may be used as a conventional back-up source or as a reservoir to supply excess DC power if needed for short periods.; 0134: , in the event one or more burglars enter the home carrying on their person their mobile devices (e.g., smart phones), the network-enabled smart home devices, upon detecting the home-invasion condition, automatically "interrogate" a burglar's mobile device to try and extract as much useful information as possible about the burglar including, but not limited to, the MAC address of their phone, their cell number, and/or anything else that their mobile device will divulge about itself or the burglar. In addition, an alarm message could be sent to the occupant's mobile device 166 and also to a security service (or police, etc.) containing some or all of this information. 
According to embodiments, the smart-home environment 100 and/or the security service that monitors the smart-home environment can automatically connect with a wireless telephone carrier to determine which mobile devices are currently communicating with the cell tower(s) nearest the burglarized home. The wireless telephone carriers could automatically generate a "suspect list" that would necessarily include the burglar's mobile device.). Therefore, it would have been obvious to a person having ordinary skill before the effective filing date to modify the reference(s) as above in order to obtain, at a computing system from a doorbell smart device positioned outside a structure of a smart environment, a condition of a phenomenon outside the structure detected by the doorbell smart device, to analyze, with the computing system, the detected condition of the phenomenon obtained from the doorbell smart device in combination with another detected condition of the phenomenon, and to automatically generate a report using the computing system based on the analysis (Fadell 0005).

15. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor of claim 1, wherein the data information comprises visitor information (0018-9: video cameras 632, 634, and 636 monitor gamers... the store would like to identify the demographic breakdown of its patrons).

16. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor of claim 15. Lipert does not explicitly disclose the following; however, Fadell teaches for further causing the processor to cause the visitor information to be compared with stored solicitor information (0437: As another example, if a solicitor visitor has been detected (e.g., an entity with an identified intention to solicit something from a system user of environment 100), platform 200 may be operative to communicate any suitable message to the solicitor).
Therefore, it would have been obvious to a person having ordinary skill before the effective filing date to modify the reference(s) as above in order to obtain, at a computing system from a doorbell smart device positioned outside a structure of a smart environment, a condition of a phenomenon outside the structure detected by the doorbell smart device, to analyze, with the computing system, the detected condition of the phenomenon obtained from the doorbell smart device in combination with another detected condition of the phenomenon, and to automatically generate a report using the computing system based on the analysis (Fadell 0005).

17. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor of claim 15. Lipert does not explicitly disclose the following; however, Fadell teaches for further causing the processor to cause visitor information to be tagged as solicitor information (0437: As another example, if a solicitor visitor has been detected (e.g., an entity with an identified intention to solicit something from a system user of environment 100), platform 200 may be operative to communicate any suitable message to the solicitor; 0457: it would be obvious that at some point the solicitor needs to be identified/tagged as a solicitor for these security networks to have stored information on solicitors).
Therefore, it would have been obvious to a person having ordinary skill before the effective filing date to modify the reference(s) as above in order to obtain, at a computing system from a doorbell smart device positioned outside a structure of a smart environment, a condition of a phenomenon outside the structure detected by the doorbell smart device, to analyze, with the computing system, the detected condition of the phenomenon obtained from the doorbell smart device in combination with another detected condition of the phenomenon, and to automatically generate a report using the computing system based on the analysis (Fadell 0005).

18. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor of claim 15, for further causing the processor to cause visitor information to be tagged as matching stored visitor information (0014; 0018-9).

19. The computer-readable, non-transitory, programmable product, comprising code, executable by a processor of claim 15. Lipert does not explicitly disclose the following; however, Fadell teaches for further causing the processor to send a match alert to a remote computing device when visitor information is matched with stored visitor information (0502).

Therefore, it would have been obvious to a person having ordinary skill before the effective filing date to modify the reference(s) as above in order to obtain, at a computing system from a doorbell smart device positioned outside a structure of a smart environment, a condition of a phenomenon outside the structure detected by the doorbell smart device, to analyze, with the computing system, the detected condition of the phenomenon obtained from the doorbell smart device in combination with another detected condition of the phenomenon, and to automatically generate a report using the computing system based on the analysis (Fadell 0005).
Allowable Subject Matter

Claim 20 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claim(s).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH W BECKER, whose telephone number is (571) 270-7301. The examiner can normally be reached on a flexible schedule, usually 10-6. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph G Ustaris, can be reached at 571-272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JOSEPH W BECKER/
Examiner, Art Unit 2483

Prosecution Timeline

Feb 07, 2024
Application Filed
Nov 14, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598313
EXTENDED LOW-FREQUENCY NON-SEPARABLE TRANSFORM (LFNST) DESIGNS WITH WORST-CASE COMPLEXITY HANDLING
2y 5m to grant Granted Apr 07, 2026
Patent 12556684
VIDEO CODING WITH GUIDED SEPARATE POST-PROCESSING STEPS
2y 5m to grant Granted Feb 17, 2026
Patent 12526394
MULTI-VIEW DISPLAY DEVICE
2y 5m to grant Granted Jan 13, 2026
Patent 12519985
Method of Coding and Decoding Images, Coding and Decoding Device and Computer Programs Corresponding Thereto
2y 5m to grant Granted Jan 06, 2026
Patent 12519973
SYSTEMS AND METHODS FOR PERFORMING MOTION COMPENSATION FOR BI-PREDICTION IN VIDEO CODING
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
72%
Grant Probability
98%
With Interview (+25.6%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 386 resolved cases by this examiner. Grant probability derived from career allow rate.
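
The projection figures above are consistent with a simple derivation from the examiner's career statistics quoted earlier on this page: 278 granted of 386 resolved cases gives the 72% baseline, and adding the +25.6-point interview lift gives the 98% with-interview figure. A minimal sketch of that arithmetic, assuming the lift is additive in percentage points and capped at 100% (the function names are illustrative, not part of any real analytics API):

```python
def career_allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate = granted cases / resolved cases."""
    return granted / resolved

def with_interview(base_rate: float, interview_lift: float) -> float:
    """Apply the interview lift (in fractional points), capped at 100%."""
    return min(base_rate + interview_lift, 1.0)

base = career_allow_rate(278, 386)     # 0.7202... -> displayed as 72%
boosted = with_interview(base, 0.256)  # 0.9762... -> displayed as 98%

print(f"Grant probability: {base:.0%}")   # Grant probability: 72%
print(f"With interview: {boosted:.0%}")   # With interview: 98%
```

Note the expected OA rounds and median time to grant are empirical figures from the examiner's docket history, not outputs of this calculation.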
