Prosecution Insights
Last updated: April 17, 2026
Application No. 18/812,995

SYSTEM AND METHOD FOR AN IMAGE EXCHANGE WITH ORIGIN TRACING

Status: Final Rejection (§103, §112)
Filed: Aug 22, 2024
Examiner: LANIER, BENJAMIN E
Art Unit: 2437
Tech Center: 2400 (Computer Networks)
Assignee: unknown
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
OA Rounds: 3-4
Time to Grant: 3y 6m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 69% (632 granted / 913 resolved; +11.2% vs TC avg; above average)
Interview Lift: +17.0% among resolved cases with an interview
Typical Timeline: 3y 6m average prosecution; 32 applications currently pending
Career History: 945 total applications across all art units

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 48.1% (+8.1% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)

Tech Center average figures are estimates. Based on career data from 913 resolved cases.
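As a rough sketch of how the headline figures above fit together arithmetically: the 69% career allow rate is 632 granted out of 913 resolved, and the +17% interview lift is approximately the gap between the 86% with-interview rate and the career rate. Treating the career rate as the lift's baseline is an assumption (the dashboard may instead compare against cases without an interview), so the numbers only reconcile approximately.

```python
# Reconstructing the dashboard figures from the counts shown above.
granted, resolved = 632, 913
career_allow_rate = granted / resolved              # ~0.692 -> the 69% shown
with_interview = 0.86                               # 86% allowance with an interview
interview_lift = with_interview - career_allow_rate # ~ +0.17 -> the +17% lift
print(f"{career_allow_rate:.1%}")                   # 69.2%
```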

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant’s amendment filed 30 January 2026 amends claims 1, 4, 5, 8, 9, 12, 14-16, 19, and 20. Applicant’s amendment has been fully considered and entered.

Response to Arguments

Applicant argues on page 9 of the response, “Applicant has amended claim 8 by clarifying how unwanted screen capture is prevented. Specifically, it is stated ‘screen capture images can be identified as inauthentic as they lack the unique identification marker.’” In response, this limitation does not clarify how unwanted screen capture is prevented, since the added limitation makes it clear that screen capture was not prevented. Instead, the added limitation makes it clear that a screen capture was performed and resulted in a screen capture image that was examined for the unique identification marker. The enablement issues set forth in the Non-Final dated 30 October 2025 (“Non-Final”) remain and, therefore, the rejection is maintained.

Applicant argues on page 9 of the response, “These amendments resolve the examiner’s concerns by making all claim language clear, specification-supported, unique, and in proper dependent form.” This argument has been fully considered and is persuasive. Therefore, the previous claim objections have been withdrawn.

Applicant argues on page 10 of the response, “…Shunock fails to teach the key component of Applicant’s invention: authentication and identification operations used to identify origin and ownership of media.” In response, the claims do not include limitations specific to the “origin” of the media as alleged by Applicant.
Applicant argues on page 10 of the response, “Shunock’s disclosure is therefore directed to a fundamentally different technical operation and does not address the problem solved by Applicant’s invention.” In response, the Shunock reference reads on the claim limitations identified below regardless of whether Shunock is directed to a fundamentally different technical operation. There is a clear disconnect between what Applicant has intended to claim and the limitations that are actually present in the current claims. Additionally, the enablement and indefiniteness issues present in the claims (see rejection below) likely contribute to this disconnect.

Applicant argues on page 12 of the response, “Moreover, publication of content on Applicant’s platform is conditioned on successful authentication of ownership based on the neural network output. Lee neither discloses nor suggests such dependency on neural network outputs for ownership verification…” This argument is not persuasive because the rejection is based on a combination of references and not just the Lee reference. Applicant has failed to address the prior art rejections as presented in the Non-Final. Specifically, Applicant has not addressed the specific claim limitations that are not believed to be disclosed by Lee. In response to Applicant’s arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant argues on page 12 of the response, “Further, the rejection lacks a clear articulation of the reasons why these features would allegedly have been obvious and, therefore, the rejection cannot be supported per the requirements set forth by the United States Supreme Court in the KSR decision.” This argument is not persuasive because each proposed modification as presented in the Non-Final rejection was followed by an explicitly provided motivation statement. Applicant has failed to address any specific motivation statement presented in the Non-Final. In response to Applicant’s argument that the examiner’s conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant’s disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).

Applicant argues on pages 13-14 of the response, “Essentially, the combination of Shunock, Johnson and Lee would be unsatisfactory for its intended purpose, because it frustrates the purpose of Applicant’s invention.” In response, this is not the test outlined in MPEP 2143.01. Instead, if a proposed modification to the primary reference would render that primary reference unsatisfactory for its intended purpose, then there is no suggestion or motivation to make that proposed modification. Applicant has not made a case that any of the proposed modifications to Shunock would render Shunock unsatisfactory for its intended purpose. Therefore, the argument is unpersuasive.
Applicant argues on page 15 of the response, “Indeed, if anything, Shunock, Johnson and Lee teach away from Applicant’s invention as claimed, because the usefulness of Applicant’s invention was not even remotely appreciated.” In response, the “usefulness” of the Applicant’s invention is not the test for teaching away. Instead, the references can be said to teach away only if the cited prior art teaches away from a proposed modification made in the rejection. Applicant has failed to present any evidence that the references teach away from the proposed modifications presented in the Non-Final.

Applicant argues on page 15 of the response, “Further, authentication has been bolstered with the direct method to do so, as accomplished by AI algorithms and the result being issuance of a digital certificate.” This argument has been fully considered and is persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Willardson, U.S. Publication No. 2024/0028986.

Applicant argues on page 16 of the response, “Venkataraman does not disclose detecting misappropriation of content, nor does Venkataraman’s system disclose automatically identifying infringement or generating alerts in response thereto.” In response, Venkataraman was never relied upon to disclose these claim limitations. The limitations in question were added in the current response and have been fully addressed below.

Applicant argues on page 17 of the response, “Further, amendments to Claim 16 have been incorporated to clarify the methods of training AI algorithms undergo to detect copyright infringement of content.” In response, claim 16 does not include any claim limitations specific to the training of AI algorithms as alleged by Applicant.
Applicant argues on page 17 of the response, “Rosenberg does not disclose or suggest the use of unique identification markers as a mechanism for protecting content ownership rights, nor does it disclose generating unique identifiers that are intentionally non-searchable by users.” In response, the claims do not include these particular features argued by Applicant.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement.

Claims 1, 9, and 16 contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention. The breadth of the claims requires publishing an output if an input is confirmed to be wholly belonging to an owner.
The nature of the invention and the state of the prior art make it clear that ownership can be determined in a myriad of ways dependent upon the type of owned object. Dependent claims 2 and 3 suggest that this owned object is media/an image. For such content, the concept of ownership is often addressed using information embedded at the time of content creation, as evidenced by U.S. Patent No. 5,636,292. However, Applicant’s specification mentions this “belonging” concept in only three sections (Page 2, lines 23-24; Page 6, lines 18-19; Page 7, lines 9-10). In each of these sections, the specification merely mentions that an image is published if the image has no history of redistribution and is wholly the owner’s. However, the specification provides no detail regarding how this ownership is confirmed as required by the claims. Additionally, the specification provides no detail regarding what constitutes being “wholly” owned as required by the claims. Therefore, while one of ordinary skill in the art may be enabled to determine the creator of an image based upon the nature of the invention and the state of the prior art, the specification fails to provide sufficient direction regarding how such content is confirmed to “wholly belong” to an owner without undue experimentation.

Claims 8, 9, and 16 contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention. The breadth of the claims requires the automatic detection of “misappropriation”. The nature of the invention and the state of the prior art make it clear that such an automated detection would require clearly defining what constitutes a “misappropriation” condition such that detection can be automated when such conditions occur (as evidenced by U.S. Patent No. 11,646,866).
However, Applicant’s specification mentions “misappropriation” in only one section (Page 3, lines 9-12). This section does not clearly define how misappropriation is detected, nor what would constitute misappropriation as claimed. Therefore, the amount of direction provided by the inventor does not enable one of ordinary skill in the art to make and use the claimed invention without undue experimentation.

Claims 8, 9, and 16 contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention. The breadth of the claims requires the utilization of a unique identification marker to prevent unwanted screen captures. The nature of the invention and the state of the prior art make it clear that preventing screen capture functionality can be accomplished by simply disabling the screen capture functionality, as demonstrated in U.S. Patent No. 10,382,620. Additionally, the screen capture functionality can be prevented using masking procedures that prevent the screen capture functionality from accessing the information being displayed to the user at the time the screen capture functionality is actuated, as demonstrated in U.S. Publication No. 2015/0302600.

However, Applicant’s specification merely discloses that “The image then generates a unique identification marker which can be any form of high-bandwidth digital content protection, by way of example and not limitation, metadata preservation schemas, digital certificates, non-fungible tokens and certificates, a blockchain ledger, and or a timed publication, which limits certain individuals or all individuals from attempting to screen capture a piece of media, or more generally preventing unwanted screen capture of a piece of media.” The specification does not explain how any form of unique identification marker described by the specification is utilized to prevent screen captures.
One of ordinary skill in the art would understand that timed publications only dictate when the output is displayed, but would not prevent screen capture at the time of publication. Therefore, the amount of direction provided by the inventor does not enable one of ordinary skill in the art to make and use the claimed invention without undue experimentation.

Claims 2-7, 10-15, and 17-20 are rejected based upon their dependence upon claims 1, 9, and 16, respectively.

Claims 9-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventors, at the time the application was filed, had possession of the claimed invention.

Claim 9 includes a unique identification marker and a digital certificate. However, the specification discloses that the digital certificate is the unique identification marker (Page 6, lines 19-21). Therefore, the specification does not support the claims’ requirement of a unique identification marker and a separate digital certificate. Claims 10-15 are rejected based upon their dependence upon claim 9.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 9, and 16 include the limitation “publish an output if an input is confirmed to be wholly belonging to an owner”, which renders the claims indefinite because it is generally unclear from the specification how this ownership is confirmed and what constitutes being “wholly owned” as claimed. The specification mentions this concept in only three sections (Page 2, lines 23-24; Page 6, lines 18-19; Page 7, lines 9-10). In each of these sections, the specification merely mentions that an image is published if the image has no history of redistribution and is wholly the owner’s. However, the specification provides no detail regarding how this ownership is confirmed as required by the claims. Additionally, the specification provides no detail regarding what constitutes being “wholly” owned as required by the claims. The claims, as viewed in light of the specification, fail to provide an objective standard for determining the claims’ boundaries.

Additionally, claims 9 and 16 include the limitation of automatically detecting misappropriation, which renders the claims indefinite because the specification does not specify how the misappropriation is detected, nor does the claim define what constitutes “misappropriation” as claimed. The specification mentions “misappropriation” in only one section (Page 3, lines 9-12). However, this section does not clearly define how misappropriation is detected, nor what would constitute misappropriation as claimed. The claims, as viewed in light of the specification, fail to provide an objective standard for determining the claims’ boundaries.

Claims 2-8, 10-15, and 17-20 are rejected based upon their dependence upon claims 1, 9, and 16, respectively.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Shunock, U.S. Publication No. 2023/0153347, in view of Johnson, U.S. Patent No. 8,798,401, and further in view of Lee, U.S. Publication No. 2023/0153419.
Referring to claim 1, Shunock discloses an image annotation system that includes a first database and a second database (Figure 1, elements 108 & 122: either database could read on the claimed databases), which meets the limitation of an information database and an image database. The system additionally includes third party websites (Figure 1, 112 & [0035]: third party website reads on the claimed web archive to the extent that the claimed web archive is never functionally utilized nor does the claim recite structure for the claimed web archive) and a network (Figure 1, 110 & [0035]), which meets the limitation of a web archive, and a network.

A user operating a mobile device that is implementing the image annotation utility sends an image to a server ([0086]: image annotation utility reads on the claimed application software), which meets the limitation of a program controller connected to said memory unit and configured to retrieve specific images based on a set of programmed controls via an application’s software. The server includes a processor and memory storing instructions executable by the processor ([0034]), which meets the limitation of at least one processor in communication with said memory unit, wherein said memory unit contains computer-readable instructions, which when executed by the said processor.

The server utilizes a statistical tracking utility to track sharing of the image by searching the database ([0104] & [0132] & [0135]: uploaded image reads on the claimed input; the statistical tracking utility utilizes the search utility), which meets the limitation of cross-reference said input in a [plurality] of databases, employ, [via a machine algorithm, a convolutional neural network] configured to cross-reference a database of locally stored [and] cloud-stored memory-stored images by at least cross-referencing.
The analytics viewing utility utilizes analytics to provide the user with a graphical output that displays the number of times that the image has been shared across various media ([0143]: graphical output reads on the claimed output) such that the analytics can provide a sense of ownership ([0142]: the concept of a “sense of ownership” reads on the claimed input confirmed to be wholly belonging to an owner to the extent understood, since Applicant’s specification fails to define what “wholly belonging” means and how this “belonging” is confirmable as claimed), which meets the limitation of determine, via said convolutional [neural network], a redistribution history of said input, publish an output if an input is confirmed to be wholly belonging to an owner.

The server generates a unique image identifier for the image ([0087]), wherein the server functionality is implemented as executable instructions stored in memory and executed by a processor in the server ([0034]: instructions that correspond with the identifier generation read on the claimed content creator upload module; Examiner notes that the claim limitations that follow the “configured to” language represent intended use since the claims do not specifically require the functionality to be performed. A recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art.
If the prior art structure is capable of performing the intended use, then it meets the claim.), which meets the limitation of generate a unique identification marker as a result of an input authentication process, wherein said input authentication process is triggered by a content creator upload module configured to allow user uploads that initiate said convolutional neural network to determine image redistribution status and publish said upload with said unique identification marker, once it is confirmed that said upload wholly belongs to said content creator.

Shunock does not specify if “the database” utilized by the search utility is the local database (Figure 1, 108) or the remote database (Figure 1, 122: the remote database would read on the claimed cloud-stored memory). However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the search utility to have searched both databases (Figure 1, elements 108 & 122) because such an embodiment of database searching represents one of a finite number of possible database searches that could have been implemented by one of ordinary skill in the art with a reasonable expectation of success.

Shunock does not disclose that the images include time/date metadata. Johnson discloses images with metadata that includes time and date information relative to when the image was captured (Col. 7, lines 55-56 & Col. 8, lines 57-58) such that when the image is uploaded, the metadata is analyzed (Col. 11, line 52 – Col. 12, line 25), which meets the limitation of confirm, via metadata of an input, a time and date of said input. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the images of Shunock to have included time and date metadata in order to allow for the server to identify related images as suggested by Johnson (Col. 1, line 62 – Col. 2, line 5).
Shunock does not disclose that the search utility implements a neural network. Lee discloses the utilization of a neural network implementing an algorithm that performs image searching ([0086]), which meets the limitation of employ, via a machine algorithm, a convolutional neural network configured to cross-reference a database of locally stored and cloud-stored memory-stored images. The results of the image searching can be utilized to train the neural network ([0215]), which meets the limitation of wherein said output is implemented into a network’s machine learning database. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the search utility of Shunock to have implemented a neural network in order to provide accurate, reliable, and quick search results as suggested by Lee ([0243]).

Referring to claims 2 and 3, Shunock discloses that the user operating a mobile device that is implementing the image annotation utility sends an image to a server ([0086]), which meets the limitations of wherein said input is a piece of media and wherein said input is an image.

Referring to claim 4, Shunock discloses that the analytics viewing utility provides the user with a graphical output that displays the number of times that the image has been shared across various media ([0143]), which meets the limitation of wherein said output is a cross-referenced piece of media acquired from a search based on said input. Shunock does not disclose that the search utility implements a neural network. Lee discloses the utilization of a neural network implementing an algorithm that performs image searching ([0086]), which meets the limitation of an innate machine learning database of uploaded inputs.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the search utility of Shunock to have implemented a neural network in order to provide accurate, reliable, and quick search results as suggested by Lee ([0243]).

Referring to claim 5, Shunock does not disclose that the images include time/date metadata. Johnson discloses images with metadata that includes time and date information, manually entered by a user, and relative to when the image was captured (Col. 7, lines 55-56 & Col. 8, lines 57-62: Examiner notes that the “for verifying” language represents intended use. A recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim.), which meets the limitation of wherein said system prompts a user to input a date and time of an input for verifying ownership of said input by machine learning algorithm. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the images of Shunock to have included time and date metadata in order to allow for the server to identify related images as suggested by Johnson (Col. 1, line 62 – Col. 2, line 5).

Referring to claim 7, Shunock does not disclose that the search utility implements a neural network. Lee discloses the utilization of a neural network implementing an algorithm that performs image searching ([0086]) and is implemented using hardware that includes a CPU and GPU ([0126]), which meets the limitation of wherein said convolution neural network operates on a device connected to a central processing unit and a graphical processing unit.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the search utility of Shunock to have implemented a neural network in order to provide accurate, reliable, and quick search results as suggested by Lee ([0243]).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Shunock, U.S. Publication No. 2023/0153347, in view of Johnson, U.S. Patent No. 8,798,401, in view of Lee, U.S. Publication No. 2023/0153419, and further in view of Rosenberg, U.S. Publication No. 2019/0156055.

Referring to claim 6, Shunock discloses that database 122 is representative of a remote location ([0044]). However, Shunock does not specify that the remote location of database 122 includes a server. Rosenberg discloses the storage of images on a plurality of databases coupled to a server ([0007]), which meets the limitation of wherein the plurality of databases are stored within a central secure server. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the remote location of Shunock to have included a server coupled to a plurality of image storing databases in order to provide efficient image storage in a manner that provides timely image delivery as suggested by Rosenberg ([0005]).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Shunock, U.S. Publication No. 2023/0153347, in view of Johnson, U.S. Patent No. 8,798,401, in view of Lee, U.S. Publication No. 2023/0153419, and further in view of Venkataraman, U.S. Publication No. 2017/0269976.

Referring to claim 8, Shunock discloses that the analytics viewing utility provides the user with a graphical output that displays the number of times that the image has been shared across various media ([0143]). Shunock does not disclose that screen capture of the graphical output can be prevented.
Venkataraman discloses a graphical user interface (GUI) that displays data ([0018]) in sessions identified by a unique identifier ([0029]) and in screens having a screen identifier ([0031]), which meets the limitation of generate a unique identification marker. A feedback agent utilizes the identifiers in order to determine whether or not screen capture functionality can be performed ([0032] & [0040]), which meets the limitation of wherein said unique identification marker is configured to prevent unwanted screen capture of said output. A comparison failure occurs if the captured screens do not include a session identifier ([0056]: a comparison failure would be considered a lack of said unique identification marker; the concept of “inauthentic” is an implication of the functional result and is not given patentable weight.), which meets the limitation of wherein the screen capture images can be identified as inauthentic as they lack said unique identification marker. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the system of Shunock to provide a means to prevent screen capture of the graphical output in order to reduce the risk of personally identifiable information (PII) exposure as suggested by Venkataraman ([0012]).

Claims 9-12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Shunock, U.S. Publication No. 2023/0153347, in view of Johnson, U.S. Patent No. 8,798,401, in view of Lee, U.S. Publication No. 2023/0153419, in view of Venkataraman, U.S. Publication No. 2017/0269976, and further in view of Willardson, U.S. Publication No. 2024/0028986.

Referring to claim 9, Shunock discloses an image annotation system that includes a first database and a second database (Figure 1, elements 108 & 122: either database could read on the claimed databases), which meets the limitation of a plurality of databases.
The system additionally includes third party websites (Figure 1, 112 & [0035]: third party website reads on the claimed web archive to the extent that the claimed web archive is never functionally utilized nor does the claim recite structure for the claimed web archive) and a network (Figure 1, 110 & [0035]). The server includes a processor and memory storing instructions executable by the processor ([0034]), which meets the limitation of a platform with a software application located on a central processing unit.

A user operating a mobile device that is implementing the image annotation utility sends an image to the server ([0086]: image annotation utility reads on the claimed application software), which meets the limitation of receiving, via a platform with a software application located on a central processing unit, an input from a user. The server utilizes a statistical tracking utility to track sharing of the image by searching the database ([0104] & [0132] & [0135]: uploaded image reads on the claimed input; statistical tracking utility utilizes the search utility), which meets the limitation of employing, [via a machine algorithm] associated with said software application, convolutional [neural network] configured to cross-reference a database of locally stored [and] cloud-stored memory-stored images by at least cross-referencing.
The analytics viewing utility utilizes analytics to provide the user with a graphical output that displays the number of times that the image has been shared across various media ([0143]: graphical output reads on the claimed output) such that the analytics can provide a sense of ownership ([0142]: the concept of a “sense of ownership” reads on the claimed input confirmed to be wholly belonging to an owner to the extent understood, since Applicant’s specification fails to define what “wholly belonging” means and how this “belonging” is confirmable as claimed), which meets the limitation of determining, via said convolutional [neural network] operating on a device connected to said central processing unit [and a graphical processing unit], a redistribution history of said input; publish, via said software application, an output if an input is confirmed to be wholly belonging to an owner; authenticating an input via [convolutional neural network] output of said redistribution history of said input to confirm if said input wholly belongs to said owner it was inputted from, wherein said authentication is configured to [automatically] detect misappropriation and report it by an [AI] algorithm operating on said platform.

Shunock does not specify that the analysis is performed automatically. However, it is well settled that it is not "invention" to broadly provide a mechanical or automatic means to replace manual activity which has accomplished the same result (In re Venner, 120 USPQ 192). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the analysis of Shunock to have been performed automatically.
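The statistical tracking mapped above, searching stored image databases to build a redistribution history for an uploaded input, can be illustrated with a minimal hypothetical sketch. Every name here (fingerprint, redistribution_count) and the exact-hash matching strategy are illustrative assumptions for explanation only, not code from Shunock or any other cited reference.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a comparable fingerprint (here, a simple SHA-256)."""
    return hashlib.sha256(image_bytes).hexdigest()

def redistribution_count(image_bytes: bytes, local_db: dict, remote_db: dict) -> int:
    """Count recorded sightings of the input image across both databases."""
    fp = fingerprint(image_bytes)
    count = 0
    for db in (local_db, remote_db):   # mirrors searching both stores (Figure 1, 108 & 122)
        count += len(db.get(fp, []))   # each entry records one sighting of the image
    return count

# Usage: two sightings in the local store, one in the remote store.
img = b"\x89PNG...example-bytes"
fp = fingerprint(img)
local = {fp: ["site-a", "site-b"]}
remote = {fp: ["site-c"]}
print(redistribution_count(img, local, remote))  # 3
```

A real system would use perceptual rather than exact hashing so that re-encoded or resized copies still match; the exact hash is used here only to keep the sketch self-contained.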
Shunock does not specify if “the database” utilized by the search utility is the local database (Figure 1, 108) or the remote database (Figure 1, 122: the remote database would read on the claimed cloud-stored images). However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the search utility to have searched both databases (Figure 1, elements 108 & 122) because such an embodiment of database searching represents one of a finite number of possible database searches that could have been implemented by one of ordinary skill in the art with a reasonable expectation of success.

Shunock does not disclose that the images include time/date metadata. Johnson discloses images with metadata that includes time and date information manually entered by a user and relative to when the image was captured (Col. 7, lines 55-56 & Col. 8, lines 57-62) such that when the image is uploaded, the metadata is analyzed (Col. 11, line 52 – Col. 12, line 25), which meets the limitation of prompting said user to provide information relevant to said input; confirm, via metadata of an input, a time and date of said input. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the images of Shunock to have included time and date metadata in order to allow for the server to identify related images as suggested by Johnson (Col. 1, line 62 – Col. 2, line 5).

Shunock does not disclose that the search utility implements a neural network.
Lee discloses the utilization of a neural network implementing an algorithm that performs image searching ([0086]), which meets the limitation of employing, via a machine algorithm associated with said software application, a convolutional neural network configured to cross-reference a database of locally stored and cloud-stored memory-stored images; determining, via said convolutional neural network operating on a device; by an AI algorithm operating on said platform. Lee further discloses that the neural network is implemented using hardware that includes a CPU and GPU ([0126]), which meets the limitation of wherein said convolution neural network operates on a device connected to a central processing unit and a graphical processing unit. The results of the image searching can be utilized to train the neural network ([0215]), which meets the limitation of wherein said output is implemented into a network’s machine learning database. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the search utility of Shunock to have implemented a neural network in order to provide accurate, reliable, and quick search results as suggested by Lee ([0243]).

Shunock does not disclose that screen capture of the graphical output can be prevented. Venkataraman discloses a graphical user interface (GUI) that displays data ([0018]) in sessions identified by a unique identifier ([0029]) and in screens having a screen identifier ([0031]), which meets the limitation of generating, via said software application, a unique identification marker. A feedback agent utilizes the identifiers in order to determine whether or not screen capture functionality can be performed ([0032] & [0040]), which meets the limitation of a unique identification marker configured to prevent unwanted screen capturing of said output.
A comparison failure occurs if the captured screens do not include a session identifier ([0056]), which meets the limitation of wherein the original images can be identified by said unique identifier marker. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the system of Shunock to provide a means to prevent screen capture of the graphical output in order to reduce the risk of personally identifiable information (PII) exposure as suggested by Venkataraman ([0012]).

Shunock does not disclose the awarding of a digital certificate. Willardson discloses performing validation procedures on uploaded images such that upon completion of the validation procedure a token certificate is generated for the uploaded images ([0042]), which meets the limitation of awarding a digital certificate for every authenticated input to confirm the owner. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the uploaded images in Shunock to have been validated such that token certificates are generated upon completion of the validation in order to provide proof of ownership as suggested by Willardson ([0042]).

Referring to claims 10 and 11, Shunock discloses that the user operating a mobile device that is implementing the image annotation utility sends an image to a server ([0086]), which meets the limitations of wherein said input is a piece of media and wherein said input is an image.

Referring to claim 12, Shunock discloses that the analytics viewing utility provides the user with a graphical output that displays the number of times that the image has been shared across various media ([0143]), which meets the limitation of wherein said output is a cross-referenced piece of media acquired from a search based on said input. Shunock does not disclose that the search utility implements a neural network.
Lee discloses the utilization of a neural network implementing an algorithm that performs image searching ([0086]), which meets the limitation of a machine learning database of uploads innate to the platform. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the search utility of Shunock to have implemented a neural network in order to provide accurate, reliable, and quick search results as suggested by Lee ([0243]).

Referring to claim 14, Shunock does not disclose that the search utility implements a neural network. Lee discloses the utilization of a neural network implementing an algorithm that performs image searching ([0086]) such that the neural network performs a convolution operation to obtain a feature map ([0020]), which meets the limitation of wherein said convolution neural network is used to perform feature extraction by generating feature maps able to be implemented into a network’s machine learning database. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the search utility of Shunock to have implemented a neural network in order to provide accurate, reliable, and quick search results as suggested by Lee ([0243]).

Claims 13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Shunock, U.S. Publication No. 2023/0153347, in view of Johnson, U.S. Patent No. 8,798,401, in view of Lee, U.S. Publication No. 2023/0153419, in view of Venkataraman, U.S. Publication No. 2017/0269976, in view of Willardson, U.S. Publication No. 2024/0028986, and further in view of Rosenberg, U.S. Publication No. 2019/0156055.

Referring to claim 13, Shunock discloses that database 122 is representative of a remote location ([0044]). However, Shunock does not specify that the remote location of database 122 includes a server.
Rosenberg discloses the storage of images on a plurality of databases coupled to a server ([0007]), which meets the limitation of wherein said plurality of databases are stored within a central secure server. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the remote location of Shunock to have included a server coupled to a plurality of image-storing databases in order to provide efficient image storage in a manner that provides timely image delivery as suggested by Rosenberg ([0005]).

Referring to claim 15, Shunock discloses that a user operating a mobile device that is implementing the image annotation utility sends an image to the server ([0086]) such that the images originate from webpages ([0056]), which meets the limitation of wherein said plurality of databases receive [APIs and] image streams from a plurality of external online web pages and sources. Shunock discloses that database 122 is representative of a remote location ([0044]). However, Shunock does not specify that the remote location of database 122 includes a server receiving APIs. Rosenberg discloses the storage of images on a plurality of databases coupled to a server ([0007]) such that the images are transmitted using APIs ([0065]), which meets the limitation of wherein said plurality of databases receive APIs and image streams from a plurality of external online web pages and sources. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the remote location of Shunock to have included a server coupled to a plurality of image-storing databases in order to provide efficient image storage in a manner that provides timely image delivery as suggested by Rosenberg ([0005]).

Claims 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shunock, U.S. Publication No. 2023/0153347, in view of Johnson, U.S. Patent No. 8,798,401, in view of Lee, U.S.
Publication No. 2023/0153419, in view of Venkataraman, U.S. Publication No. 2017/0269976, and further in view of Rosenberg, U.S. Publication No. 2019/0156055.

Referring to claim 16, Shunock discloses an image annotation system that includes a first database and a second database (Figure 1, elements 108 & 122: either database could read on the claimed databases), which meets the limitation of an information database and an image database. The system additionally includes third party websites (Figure 1, 112 & [0035]: third party website reads on the claimed web archive to the extent that the claimed web archive is never functionally utilized nor does the claim recite structure for the claimed web archive) and a network (Figure 1, 110 & [0035]), which meets the limitation of a web archive, and a network. A user operating a mobile device that is implementing the image annotation utility sends an image to a server ([0086]: image annotation utility reads on the claimed application software), which meets the limitation of a program controller connected to said memory unit and configured to retrieve specific images based on a set of programmed controls via an application’s software. The server includes a processor and memory storing instructions executable by the processor ([0034]), which meets the limitation of at least one central processing unit in communication with said memory unit, wherein said memory unit contains computer-readable instructions, which when executed by the said central processing unit.
The server utilizes a statistical tracking utility to track sharing of the image by searching the database ([0104] & [0132] & [0135]: uploaded image reads on the claimed input; statistical tracking utility utilizes the search utility), which meets the limitation of cross-reference said input in a [plurality] of databases [stored within a central secure server]; employ, [via a machine algorithm, a convolutional neural network] operating on a device connected to said central processing unit [and a graphical processing unit] configured to cross-reference a database of locally stored [and] cloud-stored memory-stored images by at least cross-referencing.

The analytics viewing utility utilizes analytics to provide the user with a graphical output that displays the number of times that the image has been shared across various media ([0143]: graphical output reads on the claimed output) such that the analytics can provide a sense of ownership ([0142]: the concept of a “sense of ownership” reads on the claimed input confirmed to be wholly belonging to an owner to the extent understood, since Applicant’s specification fails to define what “wholly belonging” means and how this “belonging” is confirmable as claimed), which meets the limitation of determine, via said convolutional [neural network], a redistribution history of said input; publish an output if an output is confirmed to be wholly belonging to an owner, as determined by said [convolutional neural network]; detect [automatically] misappropriation and report via an [AI] algorithm operating on said system, wherein said [AI] algorithm is also configured to detect copyright infringement by content-based identification measures, natural processing algorithms, and learning word vector coding.

Examiner notes that the “configured to” language represents intended use.
A recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim.

Shunock does not specify that the analysis is performed automatically. However, it is well settled that it is not "invention" to broadly provide a mechanical or automatic means to replace manual activity which has accomplished the same result (In re Venner, 120 USPQ 192). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the analysis of Shunock to have been performed automatically.

Shunock does not specify if “the database” utilized by the search utility is the local database (Figure 1, 108) or the remote database (Figure 1, 122: the remote database would read on the claimed cloud-stored images). However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the search utility to have searched both databases (Figure 1, elements 108 & 122) because such an embodiment of database searching represents one of a finite number of possible database searches that could have been implemented by one of ordinary skill in the art with a reasonable expectation of success.

Shunock does not disclose that the images include time/date metadata. Johnson discloses images with metadata that includes time and date information manually entered by a user and relative to when the image was captured (Col. 7, lines 55-56 & Col. 8, lines 57-62) such that when the image is uploaded, the metadata is analyzed (Col. 11, line 52 – Col.
12, line 25), which meets the limitation of receive, via a platform associated with a software application located on said central processing unit, an input from a user; confirm, via metadata of an input, a time and date of said input. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the images of Shunock to have included time and date metadata in order to allow for the server to identify related images as suggested by Johnson (Col. 1, line 62 – Col. 2, line 5).

Shunock does not disclose that the search utility implements a neural network. Lee discloses the utilization of a neural network implementing an algorithm that performs image searching ([0086]), which meets the limitation of employ, via a machine algorithm, a convolutional neural network configured to cross-reference a database of locally stored and cloud-stored memory-stored images; via an AI algorithm operating on said system. Lee further discloses that the neural network is implemented using hardware that includes a CPU and GPU ([0126]), which meets the limitation of a convolution neural network operating on a device connected to said central processing unit and a graphical processing unit. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the search utility of Shunock to have implemented a neural network in order to provide accurate, reliable, and quick search results as suggested by Lee ([0243]).

Shunock does not disclose that screen capture of the graphical output can be prevented. Venkataraman discloses a graphical user interface (GUI) that displays data ([0018]) in sessions identified by a unique identifier ([0029]) and in screens having a screen identifier ([0031]), which meets the limitation of generate a unique identification marker.
A feedback agent utilizes the identifiers in order to determine whether or not screen capture functionality can be performed ([0032] & [0040]), which meets the limitation of a unique identification marker configured to prevent unwanted screen capture of said output. A comparison failure occurs if the captured screens do not include a session identifier ([0056]), which meets the limitation of wherein the screen captured outputs can be identified as inauthentic as they lack said unique identifier marker. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the system of Shunock to provide a means to prevent screen capture of the graphical output in order to reduce the risk of personally identifiable information (PII) exposure as suggested by Venkataraman ([0012]).

Shunock discloses that database 122 is representative of a remote location ([0044]). However, Shunock does not specify that the remote location of database 122 includes a server. Rosenberg discloses the storage of images on a plurality of databases coupled to a server ([0007]), which meets the limitation of a plurality of databases stored within a central secure server. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the remote location of Shunock to have included a server coupled to a plurality of image-storing databases in order to provide efficient image storage in a manner that provides timely image delivery as suggested by Rosenberg ([0005]).

Referring to claims 17 and 18, Shunock discloses that the user operating a mobile device that is implementing the image annotation utility sends an image to a server ([0086]), which meets the limitations of wherein said input is a piece of media and wherein said input is an image.
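The identifier comparison mapped to Venkataraman above, in which a captured screen lacking the session's unique marker fails the comparison and can be flagged as inauthentic, can be sketched in a few lines. The marker format and all field names here are hypothetical illustrations for explanation only, not the cited reference's implementation.

```python
import secrets

def render_output(data: str) -> dict:
    """Application-rendered output carries a per-session unique identification marker."""
    return {"payload": data, "marker": secrets.token_hex(16)}

def is_authentic(image_metadata: dict, expected_marker: str) -> bool:
    """A comparison failure (missing or mismatched marker) flags the image as inauthentic."""
    return image_metadata.get("marker") == expected_marker

# Usage: the rendered output passes; a screen capture, which lacks the marker, fails.
session_output = render_output("share analytics")
screen_capture = {"payload": "share analytics"}   # capture copies pixels, not the marker

print(is_authentic(session_output, session_output["marker"]))  # True
print(is_authentic(screen_capture, session_output["marker"]))  # False
```

The point of the comparison is that authenticity is checked after the fact rather than capture being blocked, which is the distinction the rejection draws against the claim language.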
Referring to claim 19, Shunock discloses that the analytics viewing utility provides the user with a graphical output that displays the number of times that the image has been shared across various media ([0143]), which meets the limitation of wherein said output is a cross-referenced piece of media acquired from a search based on said input. Shunock does not disclose that the search utility implements a neural network. Lee discloses the utilization of a neural network implementing an algorithm that performs image searching ([0086]), which meets the limitation of a machine learning database of uploads innate to the platform. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the search utility of Shunock to have implemented a neural network in order to provide accurate, reliable, and quick search results as suggested by Lee ([0243]).

Referring to claim 20, Shunock does not disclose that the images include time/date metadata. Johnson discloses images with metadata that includes time and date information manually entered by a user and relative to when the image was captured (Col. 7, lines 55-56 & Col. 8, lines 57-62: Examiner notes that the “for verifying” language represents intended use. A recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim.), which meets the limitation of wherein said system prompts a user to input a date and time of an input for verifying ownership of said input by machine learning algorithm.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the images of Shunock to have included time and date metadata in order to allow for the server to identify related images as suggested by Johnson (Col. 1, line 62 – Col. 2, line 5).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN E LANIER whose telephone number is (571)272-3805. The examiner can normally be reached M-Th: 6:20-4:50. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Lagor, can be reached at (571)270-5143.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BENJAMIN E LANIER/
Primary Examiner, Art Unit 2437

Prosecution Timeline

Aug 22, 2024
Application Filed
Oct 28, 2025
Non-Final Rejection — §103, §112
Jan 30, 2026
Response Filed
Feb 24, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602474
USE OF AN APPLICATION CONTROLLER TO MONITOR AND CONTROL SOFTWARE FILE AND APPLICATION ENVIRONMENTS
2y 5m to grant Granted Apr 14, 2026
Patent 12598079
DIGITAL SIGNATURES WITH KEY-DERIVATION
2y 5m to grant Granted Apr 07, 2026
Patent 12587541
SECURE CONNECTION BROKER FOR SWARM COMMUNICATIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12566846
TURING MACHINE AGENT FOR BEHAVIORAL THREAT DETECTION
2y 5m to grant Granted Mar 03, 2026
Patent 12566884
MULTIMODAL FINGERPRINTING OF DIGITAL ASSETS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
86%
With Interview (+17.0%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 913 resolved cases by this examiner. Grant probability derived from career allow rate.
