Prosecution Insights
Last updated: April 19, 2026
Application No. 18/682,100

METHOD AND SYSTEM PERFORMING SELECTIVE IMAGE MODIFICATION FOR PROTECTING IDENTITIES

Final Rejection §112
Filed
Feb 07, 2024
Examiner
DHRUV, DARSHAN I
Art Unit
2498
Tech Center
2400 — Computer Networks
Assignee
Brighter AI Technologies GmbH
OA Round
2 (Final)
Grant Probability
80% (Favorable)
OA Rounds
3-4
To Grant
2y 9m
With Interview
99%

Examiner Intelligence

Career Allow Rate
80% (above average): 351 granted / 439 resolved; +22.0% vs TC avg
Interview Lift
+48.3% among resolved cases with interview (strong)
Avg Prosecution
2y 9m typical; 22 applications currently pending
Total Applications
461, across all art units

Statute-Specific Performance

§101: 16.8% (-23.2% vs TC avg)
§103: 53.0% (+13.0% vs TC avg)
§102: 5.8% (-34.2% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Comparisons are against the Tech Center average estimate; based on career data from 439 resolved cases.

Office Action

§112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action responds to the amendment dated 10/30/2025. Claims 1, 4-5, 8-10, and 12 have been amended; Claims 2, 6, and 14-16 have been canceled; Claim 17 is newly added; all other claims are previously presented. Claims 1, 3-5, 7-13, and 17 are submitted for examination and are pending. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner's Note

The Examiner contacted the Applicant's representative on November 13, 2025 and discussed unaddressed issues regarding claim interpretation under 35 U.S.C. 112(f) and the corresponding 35 U.S.C. 112(b) rejection of Claim 12. It was agreed that an office action be issued. The Examiner has interpreted independent Claim 12 and Claim 17 under 35 U.S.C. 112(f). The Applicant is advised to include a hardware processor and/or memory in the claims, should the Applicant decide to rewrite the claims to avoid claim interpretation under 35 U.S.C. 112(f).

Priority

This 371 application, filed on February 07, 2024, claims priority to PCT application PCT/EP2022/072475 filed on August 10, 2022, and foreign application EP21191073.2 filed on August 12, 2021.

Information Disclosure Statement

The following Information Disclosure Statements in the instant application were submitted in compliance with the provisions of 37 CFR 1.97 and have been fully considered: IDS filed on 07 February 2024; IDS filed on 04 September 2025.
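The claim limitations at issue in this action recite a similarity-threshold matching flow: the first device computes an identifier for a detected anonymizable object, compares it against a second device's identifier, and anonymizes the object when the similarity measure exceeds a predefined threshold. A minimal, hypothetical sketch of that flow follows; the use of cosine similarity, the vector identifiers, and the 0.8 threshold are illustrative assumptions, not details fixed by the claims.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity measure between two identifier vectors, in [-1, 1].
    Cosine similarity is an assumed choice; the claims do not specify
    a particular similarity measure."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_associated(first_identifier, second_identifier, threshold=0.8):
    """The object is associated with the second device (and would be
    modified using that device's privacy setting data) when the
    similarity measure is above the predefined similarity threshold."""
    return cosine_similarity(first_identifier, second_identifier) > threshold

# Nearly parallel identifiers exceed the threshold; dissimilar ones do not.
print(is_associated([0.9, 0.1, 0.4], [0.88, 0.12, 0.41]))  # True
print(is_associated([0.9, 0.1, 0.4], [-0.9, 0.5, 0.0]))    # False
```

The threshold trades false matches (anonymizing the wrong object) against misses (leaving an identity exposed); the claims leave that tuning to the implementation.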
Response to Arguments

Applicant's amendment, filed on October 30, 2025, amends Claims 1, 4-5, 8-10, and 12; cancels Claims 2, 6, and 14-16; and newly adds Claim 17. The prior objection to the drawings has been withdrawn in view of the drawings newly submitted on October 30, 2025. The prior objection to Claim 8 has been withdrawn in view of the amendment received on October 30, 2025. The prior rejection of Claims 14, 15, and 16 under 35 U.S.C. 112(b) has been withdrawn in view of the amendment received on October 30, 2025. The prior rejection of Claim 12 under 35 U.S.C. 112(b) is maintained.

Claim Objections

Claim 1 is objected to because of the following informalities: Claim 1 recites the limitations "determining, by the first device, that the anonymizable object is associated with a second device, by: calculating, by the first device, a first identifier of the anonymizable object; calculating, by the first device, a similarity measure between the first identifier and a second identifier comprised by the second device; and determining, by the first device, that the anonymizable object is associated with the second device when the similarity measure is above a predefined similarity threshold; and modifying, by the first device, the anonymizable object in the first image by using privacy setting data received from the second device…". The Examiner suggests replacing the first "by" with "comprises", or removing the subsequent instances of "by the first device" from these limitations, as the preceding limitation already recites "determining, by the first device". Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. 
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means" but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

in Claim 12: "a first image capturing unit configured to capture a first image"; "a first processing unit configured to: detect an anonymizable object in the first image"; "the first processing unit is configured to: calculate a first identifier of the anonymizable object"; and "wherein the first processing unit is further configured to: determine a private key n";

in Claim 17: "a first image capturing unit configured to capture a first image"; "a first processing unit configured to: detect an anonymizable object in the first image"; "the first processing unit is configured to: calculate a first identifier of the anonymizable object"; "wherein the second device comprises a second processing unit that is configured to: determine an individual private key m for the second identifier x2"; and "wherein the first processing unit is further configured to: determine a private key n".

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid such interpretation.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The claim limitations "a first image capturing unit configured to capture a first image", "a first processing unit configured to", "the first processing unit is configured to: calculate a first identifier of the anonymizable object", and "wherein the first processing unit is further configured to: determine a private key n" in Claim 12, and "a first image capturing unit configured to capture a first image", "a first processing unit configured to: detect an anonymizable object in the first image", "the first processing unit is configured to: calculate a first identifier of the anonymizable object", "wherein the second device comprises a second processing unit that is configured to: determine an individual private key m for the second identifier x2", and "wherein the first processing unit is further configured to: determine a private key n" in Claim 17, invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. 
However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. "A first image capturing unit" is interpreted as defined in paragraph 77 of the published application (US 2025/0124168), "the first device comprising a first image capturing unit configured to capture a first image"; it is not clear whether the first image capturing unit is hardware or software. "A first processing unit" is interpreted as defined in paragraph 77 of the published application, "a first processing unit configured"; it is not clear whether the first processing unit is hardware or software. "A second processing unit" is interpreted as defined in paragraph 86 of the published application, "wherein second processing unit is configured for"; it is not clear whether the second processing unit is hardware or software. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)). 
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(b) Stating on the record what corresponding structure, material, or acts, implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Allowable Subject Matter

Claims 1, 3-5, 7-13, and 17 are allowable if the 35 U.S.C. 112(b) rejection of independent Claims 12 and 17 is overcome. The following is an examiner's statement of reasons for allowance.

Kwatra discloses an approach for obscuring an individual's likeness in a digital image based on a privacy policy. The approach identifies an individual whose likeness appears in a digital image taken by a digital camera, with the digital image being stored. A determination is made, based on a privacy policy pertaining to the identified individual, whether to obscure that individual's likeness in the digital image. Responsive to the determination being positive, the approach obscures the individual's likeness as it appears in the digital image (Abstract).

FIG. 5 is a flowchart showing the steps taken during media capture in a system that provides a contextual privacy policy related to captured digital images. FIG. 5 processing commences at 500 and shows the steps taken by a process that performs a media capture operation, such as at a stand-alone digital camera or a digital camera included in a device such as a smart phone. At step 510, user 300 takes a digital image with the device, such as by looking at a viewfinder or image capture screen and pressing a trigger (e.g., a soft key) to instruct the device to take a single digital image (photo) or a series of digital images (video). The resulting raw digital image is stored in memory area 330. At step 520, the process analyzes the raw image for individuals that might be included in the image that was captured by the device. The process determines whether the likeness of one or more individuals was included in the raw image (decision 530). If likenesses of individuals were included in the image, then decision 530 branches to the 'yes' branch to perform steps 540 through 570 to apply privacy policies to those individuals. On the other hand, if no individuals were included in the image, then decision 530 branches to the 'no' branch, bypassing steps 540 through 570, with step 580 being performed to set the processed image data (memory area 380) as being the same as the raw image data (memory area 330).

Steps 540 through 570 are performed when individuals are included in the raw image data that was captured by the digital camera device. At step 540, the process initializes the processed image data stored in memory area 380 to be the same as the raw image data that is stored in memory area 330. At step 550, the process selects the likeness of the first individual from the image data. At predefined process 560, the process performs the Gather and Apply Privacy Preferences for Individuals routine (see FIG. 6 and corresponding text for processing details). This routine determines whether the likeness of the selected individual should be obscured, such as by blurring the individual's face so that it is not recognizable. The result of this routine, if likeness obscuring takes place, is an alteration (blurring) of image features of the individual's likeness in the processed image data that is stored in memory area 380. The process next determines whether the likenesses of more individuals were found in the raw image data (decision 570). If more individual likenesses are found, then decision 570 branches to the 'yes' branch, which loops back to step 550 to select and process the likeness of the next individual as described above. This looping continues until all of the individual likenesses have been processed, at which point decision 570 branches to the 'no' branch, exiting the loop. At step 590, the process provides the processed image data from memory area 380 to user 300 so that the user can view the image, print the image, post the image to social media or other website(s), etc. FIG. 5 processing thereafter ends at 595 (Fig. 5, ¶42-¶45).

Ra teaches obfuscating a human or other subject in digital media to preserve privacy. A user of a smartphone, for example, may enable a flag for obscuring her face in digital photos or movies. When any device captures digital media, the user's smartphone transmits the flag for receipt. The device capturing the digital media is thus informed of the user's desire to obscure her face or even her entire image, and may perform an obscuration in response to the flag (Abstract). FIG. 3 illustrates a flowchart of a method 300 for obfuscating an image of a subject in captured media. In one embodiment, steps, functions, and/or operations of the method 300 may be performed by an endpoint device, such as endpoint device 170 in FIG. 1. 
In one embodiment, the steps, functions, or operations of method 300 may be performed by a computing device or system 600, and/or processor 602, as described in connection with FIG. 6 below. The method begins in step 305 and proceeds to optional step 310.

At optional step 310, the method 300 sends a communication/signal indicating an intent to record captured media. For example, a user (e.g., a taker) of a mobile endpoint device may be a participant in a Do-Not-Capture (DNC) system, where the user's mobile endpoint device may be configured to transmit a DNC intent-to-record communication to nearby listening devices when the user activates a particular key, accesses a camera function, and so forth. In one example, the communication may be broadcast using Wi-Fi Direct or another short-range wireless communication mode.

At step 320, the method 300 records the captured media. For example, a device may record a photograph or video (with or without audio) using a camera and/or microphone of the device, or connected to the device. Notably, the media content may include the images of one or more subjects, any of whom may desire that his or her image (face) be obfuscated in the captured media. For example, an individual may take a photograph or video of a friend, but may inevitably capture the facial images of various strangers, some or all of whom would prefer not to appear in the media. In one embodiment, prior to or at the same time as the method 300 records the captured media at step 320, the method 300 may further track and gather data regarding motion trajectories of faces/subjects detected by a camera. For example, the method 300 may detect and track movements of faces, or all or a portion of a body, in a field of view of the camera for a short time (e.g., approximately three to ten seconds) prior to recording a photograph or video.

At optional step 330, the method 300 sends a communication indicating that the captured media is finished being recorded. For example, the method may stop transmitting an intent-to-capture signal such that nearby listening devices are made aware that the recording of the captured media is complete. In another embodiment, the method 300 may send a new signal that simply indicates that the media capture is complete.

At step 340, the method 300 receives a communication from a mobile endpoint device of a subject indicating that the image of the subject should be obfuscated in the captured media. For example, a subject participating in a DNC system may have a mobile endpoint device that is configured to listen for DNC communications indicating an intention to record captured media. In response to detecting such a communication, the listening device may then record orientation/motion information to be provided after the media is captured by the taker's mobile endpoint device. Accordingly, in one embodiment the communication may include a feature set, or feature vector, associated with the subject that includes a representation of a face of the subject and/or motion information, or a motion signature, associated with the subject. For example, the motion information may include acceleration vectors and rotation vectors recorded by the mobile endpoint device of the subject in response to receiving the communication/notification sent at step 310. In one embodiment, the communication is received wirelessly, e.g., using Wi-Fi Direct or another near-field communication technique. In one embodiment, the communication may further include a public key of a public/private key pair generated by the device of the subject or otherwise under the control of the subject.

At step 350, the method 300 detects the image of the subject in the captured media. For example, the method 300 may perform a matching process as described above to determine a match, or lack of a match, to a facial image detected in the captured media. In one embodiment, the method 300 detects all faces in the image using a facial detection algorithm and then attempts to match the facial features of the subject received at step 340 with facial features of each of the detected faces in the image. To enhance the accuracy of the matching, the method 300 may further match the motion information of the subject with trajectories of the facial images detected in the media. For example, as mentioned above, the method 300 may record motion trajectories for faces/subjects detected in the field of view of a camera. Accordingly, if the motion information does not match a motion trajectory, this may assist the method 300 in confirming that the subject and a particular facial image are not a match. It should be noted that in one embodiment, the method 300 may not attempt to match the subject to a facial image in the media if the orientation information received from the device of the subject indicates that the subject was facing away from the camera at the time the captured media was recorded. However, for illustrative purposes it is assumed that this is not the case. In other words, it is assumed that the subject matches one of the images in the captured media (e.g., by determining the Euclidean distance between a projected face from the captured media and the facial features of the subject to determine whether a match score exceeds a threshold confidence value, enhanced by matching a motion trajectory with the motion information received from the subject's device).

At step 360, the method 300 obfuscates the image of the subject in the captured media. For example, the method 300 may blur the image of the face of the subject to protect the subject's identity before the captured media is saved or pushed to the Web. Alternatively or in addition, method 300 may use image in-painting, seam carving, pixel inference, and other techniques to obscure the image of the subject's face (or a larger portion of a subject's body). 
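The distance-based matching Ra describes at step 350 (comparing the Euclidean distance between a projected face and the subject's facial features against a threshold confidence value) might be sketched as follows. The 1/(1 + d) distance-to-score mapping, the toy feature vectors, and the 0.5 confidence value are assumptions made for illustration; Ra states only that a match score is compared against a threshold.

```python
from math import dist  # Euclidean distance between coordinate sequences (Python 3.8+)

def match_score(projected_face, subject_features):
    """Map Euclidean distance between feature vectors to a
    similarity-style score in (0, 1]; identical features score 1.0.
    The 1/(1 + d) mapping is an assumed, illustrative choice."""
    return 1.0 / (1.0 + dist(projected_face, subject_features))

def is_match(projected_face, subject_features, confidence=0.5):
    """Declare a match when the score meets the threshold confidence value."""
    return match_score(projected_face, subject_features) >= confidence

# A face nearly identical to the subject's features matches; a distant one does not.
print(is_match([0.2, 0.7, 0.1], [0.21, 0.69, 0.1]))  # True
print(is_match([0.2, 0.7, 0.1], [3.0, 4.0, 0.1]))    # False
```

In Ra's scheme the motion-trajectory comparison would then serve as a second signal to reject false positives that pass the facial-distance test alone.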
In one example, the obfuscation incorporates an encryption of the image of the subject using a public key received at step 340 (Fig. 3, ¶53-¶60).

Ohira teaches, in FIG. 1, an overview of an information processing system 1. The information processing system 1 associates the same persons captured by different cameras (Person Re-identification (Re-id)) by calculating and collating feature quantities of persons from images captured by a plurality of cameras 10 (10-1, 10-2, 10-3, . . . ) installed at a station yard, the inside of a commercial facility, a shopping district, or the like. Each of the terminals 20 (20-1, 20-2, 20-3, . . . ) connected to the cameras 10 calculates feature quantities of subjects (persons and the like) shown in an image captured by the camera 10 and transmits the calculated feature quantities to a server 30. The server 30 receives the feature quantities transmitted from each terminal 20, collates them, and associates feature quantities determined to belong to the same person with each other on the basis of the collation results (a degree of similarity). In this way, the information processing system 1 can detect behaviors of the same person and can collect statistical information such as features and a behavior pattern of each person in a yard, inside a commercial facility, on a shopping street, and the like.

For example, as illustrated in the drawing, it is assumed that a person U moves through points A, B, and C in order. The person U, while present at point A, is shown in an image captured by the camera 10-1; while present at point B, in an image captured by the camera 10-2; and while present at point C, in an image captured by the camera 10-3. The terminal 20-1 calculates a feature quantity of the person U from the image captured by the camera 10-1 and transmits the calculated feature quantity to the server 30 in association with capture time information; the terminals 20-2 and 20-3 do likewise for the images captured by the cameras 10-2 and 10-3. The server 30 collates the feature quantities transmitted from the terminals 20 and associates feature quantities determined to be of the same person with each other. In this way, the information processing system 1 can detect the behavior of a certain same person (here, the person U). In addition, the information processing system 1 does not perform identification of individuals, i.e., it does not identify the person represented by the feature quantities associated as the same person (Fig. 1, ¶28-¶29).

However, none of the cited art teaches the recited claim limitations.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to the PTO-892, Notice of References Cited, for a listing of analogous art.

Bapat et al. (US # 2022/0245396) discloses methods for recognizing persons in video streams. In one aspect, a method includes: (1) obtaining a live video stream; (2) detecting person(s) in the stream; and (3) determining, from analysis of the live video stream, personally identifiable information of the detected person(s); (4) determining, based on the personally identifiable information, that the first person is not known to the computing system; (5) in accordance with the determination that the first person is not known: (a) storing the personally identifiable information; and (b) requesting a user to classify the first person; and (6) in accordance with (i) a determination that a predetermined amount of time has elapsed since the request was transmitted and a response was not received, or (ii) a determination that a response was received classifying the first person as a stranger, deleting the stored personally identifiable information.

Teissonniere et al. (US # 2021/0350938) discloses a method that may include collecting from each of multiple endpoint devices a set of anonymized interactions of the corresponding endpoint device with other endpoint devices. 
Each anonymized interaction in the set of anonymized interactions may be based on an ephemeral unique identifier of another endpoint device involved in a corresponding anonymized interaction with the corresponding endpoint device. The method may include, for each endpoint device, resolving identities of the other endpoint devices with which the corresponding endpoint device has interacted from the corresponding set of anonymized interactions.

Badalone et al. (US # 2021/0240851) discloses a method and system for privacy-aware movement tracking that includes receiving a series of images of a field of view, such as captured by a camera, the images containing movement of an unidentified person within the field of view. A body region corresponding to the person is detected within the images. A movement dataset for the unidentified person is generated based on tracking movement of the body region over the field of view within the images. A characterizing feature set is determined for the unidentified person and associated with the movement dataset to form a first track entry. Anonymizing of the body region can be applied to remove identifying features while or prior to determining the characterizing feature set. A second track entry can be generated from a second series of images, and a match between the track entries can be determined. A method and system for privacy-aware operation and learning of a computer-implemented classification module is also contemplated.

Perry et al. (US # 2020/0097767) discloses a method for training a human perception predictor to determine the level of perceived similarity between data samples, the method including: receiving at least one media file; determining at least one identification region for each media file; applying at least one transformation on each identification region for each media file until at least one modified media file is created; receiving input regarding similarity between each modified media file and the corresponding received media file; and training a machine learning model with an objective function configured to predict similarity between media files by a human observer in accordance with the received input.

Moloney (US # 2020/0098096) discloses examples to selectively generate a masked image, including: a convolutional neural network detector to detect a first feature and a second feature in an image captured by a camera; a feature recognizer to determine the first feature is a displayable feature and the second feature is a non-displayable feature by comparing the first and second features of the image to reference feature images stored in a memory; and a blur generator to generate the masked image to display the displayable feature and mask the non-displayable feature.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARSHAN I DHRUV, whose telephone number is (571) 272-4316. The examiner can normally be reached M-F, 9:00 AM-5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Yin-Chen Shaw, can be reached at 571-272-8878. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DARSHAN I DHRUV/Primary Examiner, Art Unit 2498

Prosecution Timeline

Feb 07, 2024
Application Filed
Jul 26, 2025
Non-Final Rejection — §112
Oct 30, 2025
Response Filed
Nov 15, 2025
Final Rejection — §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603788
Managing hygiene of key pairs between certificate authorities using FHE
2y 5m to grant Granted Apr 14, 2026
Patent 12603789
SYSTEMS AND METHODS FOR SECURING INTERCONNECTING DIRECTORIES
2y 5m to grant Granted Apr 14, 2026
Patent 12603767
SYSTEM AND METHOD FOR OPERATING OBJECT
2y 5m to grant Granted Apr 14, 2026
Patent 12603768
SYSTEMS AND METHODS FOR PROVIDING AND MAINTAINING SECURE CLIENT-BASED PERMISSION LISTS
2y 5m to grant Granted Apr 14, 2026
Patent 12592940
ATM INTEGRITY MONITOR (AIM) SYSTEM AND METHOD FOR DETECTING CYBER ATTACKS ON ATMS NETWORKS
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+48.3%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 439 resolved cases by this examiner. Grant probability derived from career allow rate.
