Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s response to the last Office Action, filed 6/2/2025, has been entered and made of record.
Claims 1-20 are currently pending.
Applicant's arguments filed 9/4/2025 have been fully considered but they are not persuasive.
Applicant argues that Agarwal is silent on determining whether the physical object has been physically delivered to a physical address of the presenter and therefore deemed to be physically in possession of the presenter. Applicant's arguments have been fully considered but are moot in view of the new ground(s) of rejection necessitated by the amendments. Therefore, the rejection over Agarwal has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Nidmarthi et al. (US 10,068,262) and Madden et al. (US 2019/0043326).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nidmarthi et al. (US 10,068,262) in view of Madden et al. (US 2019/0043326).
As to claim 1, Nidmarthi et al. teaches the method comprising: determining that a physical object is to be presented, by a presenter, in a streaming video;
identifying a set of physical objects that are confirmed as having been physically delivered to a physical address of the presenter and therefore deemed to be physically in possession of the presenter (column 5, lines 38-40: "After a delivery process 114, such as transport with a commercial delivery service, a recipient possesses a physical package at receipt 116");
accessing an image recognition model that is trained using the set of physical objects and therefore deemed to be physically in possession of the presenter (e.g., FIG. 6: Application interface 505 of mobile device 500 displays contents information 600 on touch screen 520. Contents information 600 is a data structure indicating contents of the physical package. In the example presented in FIG. 6, contents information 600 is presented as an image portraying a product included in the physical package (not shown). In alternative embodiments, contents information 600 is a list of products or one or more specification sheets for one or more products. In addition to contents information 600, application interface 505 includes a subsequent function selection interface 605; col. 17, line 65 to col. 18, line 22);
applying the image recognition model to the streaming video (figure 9); determining, based on output from the image recognition model, whether the streaming video includes the physical object (column 6, lines 11-14: "Mobile computing device 118 receives over wireless network 122 from server 104 an item of transaction information 126"; column 7, lines 1-9: "User interaction functions that can be performed by mobile computing device 118 include sending a damage report or gathering feedback about recipient satisfaction with physical package at receipt 116. In such an embodiment, recipient reaction information 124 includes an indication of recipient satisfaction and can, at the option of the user, include detailed information such as a photograph taken with a camera of mobile computing device 118, showing damage 128 on physical package at receipt 116"; column 19, lines 25-42: "opportunity to photograph a physical package using a camera associated with mobile computing device"; figure 9); in response to determining that the physical object is part of the set of physical objects, permitting the physical object to be presented in the streaming video (column 6, lines 6-11: "Mobile device 118 transmits over a wireless network 122 to server 104 on network 106 recipient reaction information 124"). While Nidmarthi teaches the limitations above, Nidmarthi fails to teach "in response to determining the physical object is not part of the set of physical objects and therefore deemed not to be physically in possession of the presenter, blocking the physical object from being presented in the streaming video." However, the preceding limitation is known in the art of communications, as taught by Madden. Madden teaches that the portable camera 56 may communicate with third party servers 180 associated with a delivery at the property 102 to confirm the delivery schedule, update the delivery activities, and send the videos captured by the portable camera 56 ([0018]).
Madden further teaches that the portable camera 56 may be activated when the control unit 110 determines that the portable camera 56 corresponds to the preset delivery schedule, and deactivated when the visitor 50 leaves the property 102. In some cases, the visitor 50 may manually activate/deactivate the portable camera 56. Based on the preceding disclosure, it is inherent that an application is integrated in the camera for performing the functions of activating and deactivating the camera without human intervention ([0083], [0098], [0121]). Madden clearly teaches that the control unit 110 may determine an operation status of the portable camera 56 based on the video recorded by the portable camera 56 before providing access to the property 102. For instance, the control unit 110 may determine whether the portable camera 56 is occluded based on the video recorded by the portable camera 56. If the visitor 50 or some other object (e.g., a cover of the portable camera 56) blocks the portable camera 56, the video recorded by the portable camera 56 may include a black image or blank pixels. In this example, the control unit 110 may determine that the portable camera 56 is not in a normal status, and may not provide access to the property 102 (paragraph [0093]). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to have implemented the technique of Madden within the system of Nidmarthi in order to allow the customer to remotely activate home automation components to assist a task that a delivery person is planned to perform at the location of delivery. Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
As to claim 2, Nidmarthi et al. teaches the method of claim 1, wherein identifying the set of physical objects occurs responsive to determining that the physical object is to be presented in the streaming video (reads on users' interaction/transaction on social media; column 8, line 56 - column 9, line 20; figure 1A: social network 140).
As to claim 3, Nidmarthi et al. teaches the method of claim 1, wherein the set of physical objects are additionally confirmed as still being present at the physical address of the presenter (communication taking place between the physical object provider and a delivery service, this communication implying a dedicated protocol; col. 8, lines 22-40).
As to claim 4, Madden et al. teaches the method of claim 1, wherein the image recognition model is trained by: generating a training set using images of the set of physical objects as labeled by a positive outcome (the recognition model may include a neural network that is trained to classify or recognize objects such as humans, animals, vehicles, or text. In some examples, the recognition model may also include an optical character recognition (OCR) function to recognize characters included in an image of the visitor 50, an image of the package 54, or an image of a vehicle that the visitor 50 rode to the property 102. The system 100 may utilize the neural network, the OCR function, or a combination of the neural network and the OCR function to identify a location of the visitor 50 based on the image); and training the image recognition model to detect objects of the set of physical objects using the training set (the recognition model may also include an optical character recognition (OCR) function to recognize characters included in an image of the visitor 50, an image of the package 54, or an image of a vehicle that the visitor 50 rode to the property 102. The system 100 may utilize the neural network, the OCR function, or a combination of the neural network and the OCR function to identify a location of the visitor 50 based on the image; paragraph [0089]).
As to claim 5, Nidmarthi et al. teaches the method of claim 1, wherein the image recognition model is trained to detect a given object of the set of physical objects responsive to determining that the given object has been delivered to the physical address of the presenter (FIG. 6: Application interface 505 of mobile device 500 displays contents information 600 on touch screen 520. Contents information 600 is a data structure indicating contents of the physical package. In the example presented in FIG. 6, contents information 600 is presented as an image portraying a product included in the physical package (not shown). In alternative embodiments, contents information 600 is a list of products or one or more specification sheets for one or more products. In addition to contents information 600, application interface 505 includes a subsequent function selection interface 605; col. 17, line 65 to col. 18, line 22).
As to claim 6, Nidmarthi et al. teaches the method of claim 5, wherein the image recognition model is retrained to avoid detecting the given object responsive to determining that the given object has been delivered away from the physical address of the presenter (column 6, lines 11-14: "Mobile computing device 118 receives over wireless network 122 from server 104 an item of transaction information 126"; column 7, lines 1-9: "User interaction functions that can be performed by mobile computing device 118 include sending a damage report or gathering feedback about recipient satisfaction with physical package at receipt 116. In such an embodiment, recipient reaction information 124 includes an indication of recipient satisfaction and can, at the option of the user, include detailed information such as a photograph taken with a camera of mobile computing device 118, showing damage 128 on physical package at receipt 116"; column 19, lines 25-42: "opportunity to photograph a physical package using a camera associated with mobile computing device"; figure 9).
As to claim 7, Madden et al. teaches the method of claim 1, wherein accessing the image recognition model comprises accessing a plurality of image recognition models, each trained to detect one object of the set of physical objects, and wherein applying the image recognition model to the streaming video comprises applying each of the plurality of image recognition models to the streaming video (image recognition model, paragraph [0089]; Madden teaches that the portable camera 56 may communicate with third party servers 180 associated with a delivery at the property 102 to confirm the delivery schedule, update the delivery activities, and send the videos captured by the portable camera 56 ([0018]). Madden further teaches that the portable camera 56 may be activated when the control unit 110 determines that the portable camera 56 corresponds to the preset delivery schedule, and deactivated when the visitor 50 leaves the property 102. In some cases, the visitor 50 may manually activate/deactivate the portable camera 56. Based on the preceding disclosure, it is inherent that an application is integrated in the camera for performing the functions of activating and deactivating the camera without human intervention ([0083], [0098], [0121])).
As to claim 8, Nidmarthi et al. teaches the method of claim 7, wherein the plurality of image recognition models has models added to it responsive to their corresponding physical objects being delivered to the physical address of the presenter, and wherein the plurality of image recognition models has models removed from it responsive to their corresponding physical objects being delivered away from the physical address of the presenter (column 6, lines 11-14: "Mobile computing device 118 receives over wireless network 122 from server 104 an item of transaction information 126"; column 7, lines 1-9: "User interaction functions that can be performed by mobile computing device 118 include sending a damage report or gathering feedback about recipient satisfaction with physical package at receipt 116").
As to claim 9, Nidmarthi et al. teaches the method of claim 1, wherein the streaming video is streamed to users by way of a provider of the physical object, and wherein identifying the set of physical objects that are confirmed as having been delivered to a physical address of the presenter comprises: determining, from a profile of the presenter stored in a database of the provider of the physical object, a plurality of physical objects ordered to the physical address of the presenter; determining which of the plurality of physical objects ordered to the physical address of the presenter were delivered to the physical address of the presenter; and assigning the determined ones of the physical objects to the set of physical objects (column 9, lines 6-22 and column 11, lines 1-40).
As to claim 10, Nidmarthi et al. teaches the method of claim 1, further comprising: receiving, by way of a dedicated application protocol interface established between the physical object provider and a delivery service responsible for delivery of the physical object to the physical address of the presenter, a notification that a parcel was delivered to the physical address of the presenter; and correlating the notification to the determined ones of the plurality of physical objects (col. 12, lines 11-40; labeling associated with the QR codes indicates to the user that the scanning of a particular QR code from among the multiple QR codes provides information with respect to user satisfaction regarding the received package, col. 15, lines 11-20).
As to claim 11, Nidmarthi et al. teaches the method of claim 10, further comprising: receiving, by way of the dedicated application protocol interface, a return notification indicating that a return parcel was retrieved from the physical address of the presenter; correlating the return notification to a subset of the set of physical objects; and modifying the set of physical objects to exclude the subset (Some embodiments support the use of transaction information 126 as a means for a user of mobile computing device 118 to indicate that the user is preparing to return physical package at receipt 116 to a shipper and provide systems for facilitating that return, such as auto-produced return shipping labels. Such an indication that a user intends to return physical package at receipt 116 to a shipper may automatically trigger the presentation to a user of mobile computing device 118 of an interface for sending a request for customer service contact 130. An interface for facilitating a request for customer service contact related to the transaction is discussed below with respect to FIG. 10. In some embodiments, mobile computing device 118 allows a user to include any or all items of recipient reaction information 124 included in feedback about recipient satisfaction with physical package at receipt 116, described above, as part of request for customer service contact 130; column 8, lines 19-40).
As to claim 12, Madden et al. teaches the method of claim 1, wherein, further in response to determining that the physical object is part of the set of physical objects, tagging a portion of the streaming video where the physical object is presented, and wherein, during a replay of the streaming video, a selectable option corresponding to the physical object is displayed that, when selected, causes the replay of the streaming video to jump to the portion of the video (the recorded videos and images may be tagged with various milestones of the process to enable quick review. For instance, the user (e.g., customer, delivery provider) may jump to a video captured at the package drop-off to quickly verify the delivery was complete. For example, the visitor 50 may press a button on the portable camera 56 to indicate that the visitor 50 is about to leave the package on the ground in the property 102 using the portable camera 56. In other examples, the monitoring server 160 may analyze data from the cameras 130 and use object recognition to detect when the visitor 50 places the package 54 on a surface and tag that moment as the package drop-off milestone; paragraph [0032]).
The limitations of claims 13-20 have been addressed above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY BITAR whose telephone number is (571) 270-1041. The examiner can normally be reached Monday-Friday from 8:00 a.m. to 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mr. Nay Maung can be reached at 571-272-7882. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NANCY BITAR
Examiner
Art Unit 2664
/NANCY BITAR/Primary Examiner, Art Unit 2664