Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s request for continued examination filed 12/16/2025 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Davis et al. (US 2020/0285836) in view of Carson et al. (US 2022/0351525), in further view of Deluca et al. (US 2023/0360031).
Regarding claims 1, 11, and 12, Davis discloses a management system comprising:
a blockchain network connected to a device (element 30 in Figure 2A; see paragraph 22), wherein the device includes a processor; and a memory storing a program which, when executed by the processor, causes the device to execute recognition processing on acquired image data (facial images acquired and recognized in paragraph 20), hold attribute information related to capturing of the image data (user name and password), and convert the image data into a hash value (facial images and attributes are converted into hash values in paragraph 29), and the management system links, as linked data, the hash value converted by the edge device, the result of the recognition processing by the device, and the attribute information held by the edge device with one another, and registers the linked data in a distributed ledger in the blockchain network (paragraph 22, which links the hash, face classification, and attributes in the IPFS distributed ledger).
Davis discusses having servers perform computation in paragraph 22 but fails to specifically call the servers “edge servers.” Carson, however, shows that it is well known to have edge servers capture and detect/recognize objects (paragraphs 13 and 139), as well as to hash the data (paragraph 24). Therefore, it would have been obvious before the effective filing date to combine the systems of Davis and Carson to allow for distributed and secure network transmission of critical image data from edge servers across the network.
Although Davis and Carson do not specifically teach that an NFT is issued, Deluca et al. does. Further, Deluca teaches that upon issuance of the NFT, the NFT will include as metadata i) hash values (paragraph 35) and ii) attribute information, and that the program causes devices on the public blockchain network to mine for information registered in the distributed ledger (paragraph 36). The blockchain video of all three references could be considered a non-fungible token in that it is equivalent to what applicant disclosed as an NFT. Therefore, since all three systems (Davis, Carson, and Deluca) use distributed networks to communicate tokenized image data, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to include the hashed attributes and the image data to provide more information and allow for better tracking of ownership of digital video assets. It should also be noted that upon issuance of digital tokens, ledgers are mined and updated by blockchain activities. Thus, it does not appear that applicant has claimed any novel steps.
Regarding claim 2, Carson et al. teaches the management system according to claim 1, wherein the recognition processing is processing to inspect an external view of an object (paragraph 141, where vehicles and license plates are viewed), and the attribute information includes process information of the inspection (whether the vehicles are subject to the traffic congestion pricing policy).
Regarding claim 3, Carson discloses the management system according to claim 1, wherein the recognition processing is processing to recognize an object while a vehicle is travelling, and the attribute information includes vehicle information related to travelling of the vehicle (again, paragraph 141, where moving vehicles are travelling and may be subject to the traffic congestion pricing policy).
Regarding claim 4, Davis teaches the management system according to claim 1, wherein the recognition processing is processing to recognize a person whose image is captured by a monitoring camera (paragraph 26), but does not specifically disclose that the attribute information includes installation information of the monitoring camera. However, Carson does (see paragraphs 56-58, where the camera is installed in a moving vehicle and its location is considered an attribute). Since both systems are used for distributed image analysis over a networked environment, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the networks of Davis and Carson to allow for a more secure and functional object tracking system.
Regarding claim 5, Carson discloses the management system according to claim 4, wherein the recognition processing is processing to recognize the object entering or exiting an area (the vehicle enters an enforcement zone, paragraph 56).
Regarding claim 6, Carson teaches the management system according to claim 1, wherein the image data is image data captured by a digital camera, and the attribute information includes owner information of the digital camera (cameras that communicate over the networks listed in paragraph 51 include information on who the phone is registered to, and thus its owner (caller ID or registered owner)).
As for claim 13, note Figure 4 of Deluca, in which the NFT is not sent until after verification in step 420.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Davis et al. (US 2020/0285836) in view of Carson et al. (US 2022/0351525), in view of Deluca et al. (US 2023/0360031), in further view of Bertsch et al. (US 11,609,950).
As shown above, Davis, Carson, and Deluca teach all of the elements of the claims above; however, they do not specifically teach acquiring audio information.
As for claim 7, Bertsch, in cols. 31-32, shows: acquiring audio data related to the image data (col. 31, lines 45-50); executing recognition processing on the audio data (converting speech to text, col. 32, lines 14-18); holding attribute information related to the audio data (metadata); and converting the image data and the audio data into hash values (col. 31, lines 51-52). Since all three systems (Davis, Carson, and Bertsch) use distributed networks to communicate hashed image data, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to include audio along with the image data to provide more information and allow for a better understanding of who and what are present in the images and their accompanying audio. This can also help with the image recognition.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER S KELLEY, whose telephone number is (571) 272-7331. The examiner can normally be reached Mon-Fri, 6:30 am to 3:30 pm, with alternate Fridays off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Kramer, can be reached at 571-272-6783. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER S KELLEY/Supervisory Patent Examiner, Art Unit 2482