Prosecution Insights
Last updated: April 19, 2026
Application No. 18/854,985

A SYSTEM AND METHOD FOR ANONYMIZING VIDEOS

Non-Final OA: §101, §102, §103, §112
Filed: Oct 08, 2024
Examiner: LIN, AMIE CHINYU
Art Unit: 2436
Tech Center: 2400 — Computer Networks
Assignee: Kartik Mangudi Varadarajan
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (above average): 254 granted / 300 resolved, +26.7% vs TC avg
Interview Lift: +30.2% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 7m typical timeline; 9 currently pending
Total Applications: 309 across all art units
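The panel figures above are simple ratios, so they can be sanity-checked directly. A quick sketch, assuming whole-point rounding and assuming "+26.7% vs TC avg" means the examiner's allow rate minus the Tech Center average (the implied TC average itself is inferred, not shown on the page):

```python
# Sanity-check the examiner panel's headline numbers from the raw counts.
# Assumptions: percentages are plain ratios rounded to whole points, and
# "+26.7% vs TC avg" means allow rate minus the Tech Center average.
granted = 254
resolved = 300

allow_rate = granted / resolved * 100   # career allow rate
print(round(allow_rate))                # -> 85, matching the "85%" shown

tc_delta = 26.7                         # "+26.7% vs TC avg" from the panel
tc_avg = allow_rate - tc_delta          # implied Tech Center average (inferred)
print(round(tc_avg, 1))                 # -> 58.0
```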

Statute-Specific Performance

§101: 13.7% (-26.3% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§112: 17.5% (-22.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 300 resolved cases.

Office Action

§101 §102 §103 §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to the communication filed on 10/08/2024. Claims 1-22 are pending.

Claim Objections

Claim 22 is objected to because of the following informalities: “The system according to claim 19 comprising:” recited in claim 22 should read “The method according to claim 19 comprising:”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-5, 14-15, 19, and 21-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

Claim 3: It’s unclear whether “the video frame” recited in the last line refers to “a video frame” in line 2 of claim 3, “a video frame” in line 4 of claim 3, or some other video frame. For the purpose of examination, “the video frame” has been interpreted as referring to any video frame. Note that claim 19 also has this issue.

Claim 4: There is insufficient antecedent basis for the limitation “the preceding or following tagged video frames”. It’s also unclear whether this term refers to “immediately preceding and following video frames” in claim 4, or some other preceding or following tagged video frames. For the purpose of examination, “the preceding or following tagged video frames” has been interpreted as referring to any preceding or following tagged video frames. Note that claim 19 also has this issue. Also, there is insufficient antecedent basis for the limitation “the value of the incorrectly tagged video frames” in claim 4.

Claim 5: There is insufficient antecedent basis for the limitations “the incorrectly tagged video frame”, “the determined incorrectly tagged video frame”, and “the inspected set of consecutive preceding and following video frame”. Note that claim 19 also has this issue. Also, there is insufficient antecedent basis for the limitations “the tagged values of the set of consecutive preceding and following video frames”, “the tagged value of the determined incorrectly tagged video frame”, and “the value of at least one of the video frames among the inspected set of consecutive preceding and following video frame” in claim 5.

Claim 14: It’s unclear whether “the video frames” recited in line 2 refers to “a plurality of video frames” in claim 1, “the video frames among the inspected set of consecutive preceding and following video frame” in claim 5, or some other video frames. For the purpose of examination, this term has been interpreted as referring to any video frames. Also, it’s unclear whether “the tag values” recited in line 11 refers to “tag values” in line 6 of claim 14, “tag values” in line 8 of claim 14, “tag values” in line 10 of claim 14, or some other tag values. For the purpose of examination, “the tag values” has been interpreted as referring to any tag values. Note that claims 19 and 22 also have similar issues.

Claim 15: It’s unclear whether “the video frames” recited in line 2 refers to “a plurality of video frames” in claim 1, “the video frames among the inspected set of consecutive preceding and following video frame” in claim 5, “video frames” in claim 15, or some other video frames. For the purpose of examination, this term has been interpreted as referring to any video frames. Also, there is insufficient antecedent basis for the limitation “updated tag values of the video frames” in claim 15.

Claim 21: It’s unclear whether “the video frames” recited in line 3 refers to “a plurality of video frames” in claim 18, “video frames” in claim 21, or some other video frames. For the purpose of examination, this term has been interpreted as referring to any video frames.

Claim 22: It’s unclear whether “the video frames” recited in line 1 refers to “a plurality of video frames” in claim 18, any one of the “video frames” in claim 19, or some other video frames. For the purpose of examination, this term has been interpreted as referring to any video frames.

Dependent claims are also rejected for inheriting the deficiencies of the claims from which they depend.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because the claims do not include at least one hardware element in their bodies. Claim 1 recites a server comprising one or more processors; the Examiner notes that the specification states that the one or more processors may be implemented in software (e.g., [0025] of the specification). Thus, under the broadest reasonable interpretation, the claimed one or more processors can be interpreted as software. The claimed invention is therefore directed to non-statutory subject matter, software per se. Claims 2-17 are dependent from claim 1 and are rejected under similar rationale.
Claims 1-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 recites: receive a video captured by an imaging device, wherein the video comprises of a plurality of video frames; analyse each of the video frames captured by the imaging device and detect reference entity in each of the video frames; and anonymize at least a portion of each of the video frames in which the reference entity is absent. The above limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the mental processes grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional elements of a server comprising one or more processors that perform the steps in the claim; these additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, the combination of the above additional elements does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Furthermore, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a server comprising one or more processors amount to no more than mere instructions to apply the exception using generic computer components; mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim does not recite any other additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Dependent claims 2-17 further clarify the concept recited in claim 1; however, the clarification still falls under the concept recited in claim 1 and does not amount to significantly more than the judicial exception; thus, the claims are directed to an abstract idea. Claim 18 contains similar elements as recited in claim 1 and is rejected for similar reasons. Dependent claims 19-22 contain similar elements as recited in claims 2-17 and are rejected for similar reasons.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3, and 16-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bishop et al. (US 2021/0012032).
Claim 1: Bishop teaches: A system for anonymizing videos, the system comprising: a server, wherein the server comprises one or more processors configured to: receive a video captured by an imaging device, wherein the video comprises of a plurality of video frames; (e.g., [0011], “receiving a data stream captured by the surgical robotic system, the data stream comprising data relating to a surgical procedure and comprising personally-identifiable data”; [0013], “generating, in dependence on the determined personally-identifiable feature and the received data stream, an anonymised data stream omitting the personally-identifiable data”; [0020], “Generating the anonymised data stream may comprise, in dependence on the determined personally-identifiable feature, one or more of: removing a data portion from the received data stream, the removed data portion comprising personally-identifiable data, and masking a data portion of the received data stream, the masked data portion comprising personally-identifiable data. Masking the data portion may comprise one or more of: blurring the data portion; and replacing values of data in the data portion with mask values. The data portion may comprise one or more partial frame of data. The data portion may comprise one or more frame of data. Determining the personally-identifiable feature may comprise determining a partial frame of data to which the personally-identifiable feature relates”)

analyse each of the video frames captured by the imaging device and detect reference entity in each of the video frames; and (e.g., [0060], “Once the endoscope is inserted through the port, it will capture video of the internal surgical site (i.e. video from inside the surgical field), which is highly unlikely to contain personally-identifiable data. This will remain the case until the endoscope is retracted from the port (when the video will transition from being inside the surgical field to being outside the surgical field), when it will again begin to capture video of the operating room, potentially including personally-identifiable data. The ‘safe’ data can therefore comprise video captured between insertion of the endoscope through the port and retraction of the endoscope from the port, i.e. video of a target operative field, and suitably nothing else. This approach can thus involve determining when the endoscope is inserted through the port and when it is retracted from the port and removing or otherwise anonymising any portion of video data captured between these identified times, i.e. when the endoscope is not inserted through the port”; [0113], “(i) Whether the Video Data Comprises a Circle that Grows or Shrinks”; [0114], “As the endoscope passes through the port towards the surgical site, the port circumference will appear as a circle in the endoscope video that expands past the screen boundaries. As the endoscope is retracted from the port, the port circumference will appear as a circle in the endoscope video that contracts from beyond the screen boundary. The circle may leave the field of view of the endoscope to one side as the endoscope is moved away from the port. Thus, detecting, for example by image recognition, whether the video image comprises one or other (or both) of an expanding and a contracting circle can enable the detection of whether the endoscope tip passes inwardly or outwardly through the port (or both). Determination of this feature can therefore enable the data anonymiser to determine whether the endoscope tip is transitioning into or out of the surgical field”)

anonymize at least a portion of each of the video frames in which the reference entity is absent. (e.g., [0114], “Thus, detecting, for example by image recognition, whether the video image comprises one or other (or both) of an expanding and a contracting circle can enable the detection of whether the endoscope tip passes inwardly or outwardly through the port (or both). Determination of this feature can therefore enable the data anonymiser to determine whether the endoscope tip is transitioning into or out of the surgical field. Thus this determination can enable the data anonymiser to generate the anonymised data accordingly. Data captured before endoscope insertion and/or after endoscope retraction can be anonymised. The data in between these events can be indicated to be ‘safe’ and not require anonymization”: frame data captured with the circle absent is anonymised)

Claim 3: Bishop teaches: wherein the one or more processors are configured to: tag a video frame with a first value if one or more reference entity is detected in the video frame; and tag a video frame with a second value if reference entity is absent in the video frame. (e.g., [0059]-[0060])

Claim 16: Bishop teaches: wherein the one or more processors are configured to detect plurality of reference entities when the video frames comprise of multiple reference entities. (e.g., [0111], [0117], [0134])

Claim 17: Bishop teaches: wherein the one or more processors are configured to: create a virtual area around the plurality of reference entities, wherein the virtual area covers the plurality of reference entities; and anonymize portions of the video frames other than the virtual area, wherein the virtual area is determined based on position and orientation of the reference entities. (e.g., [0106], [0114], [0135])

Claim 18: This claim is directed to a method containing similar limitations as recited in claim 1 and is rejected under similar rationale.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Bishop et al. (US 2021/0012032) in view of Sanmugalingham (US 2017/0255751).

Claim 2: Bishop teaches the reference entity configured to be detected by the one or more processors of the server (see above) and does not appear to explicitly teach, but Sanmugalingham teaches: a fiducial marker. (e.g., [0055], [0057]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Sanmugalingham into the invention of Bishop, and the motivation for such an implementation would be for the purpose of providing data collection, storage and management that anonymizes and aggregates patient records, and permits the aggregated patient records to be analyzed while maintaining privacy and ethical considerations (Sanmugalingham [0009]-[0010]).

Claims 4-5 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Bishop et al. (US 2021/0012032) in view of Bae et al. (US 2015/0326884).

Claim 4: Bishop teaches the plurality of video frames (see above) and does not appear to explicitly teach, but Bae teaches: identify incorrectly tagged video frames among a plurality of video frames by analysing tagged values of immediately preceding and following video frames; and replace the value of the incorrectly tagged video frames with a value of one of the preceding or following tagged video frames. (e.g., [0022]-[0023]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Bae into the invention of Bishop, and the motivation for such an implementation would be for the purpose of preventing unpleasant user experiences in high-definition video environments (Bae [0002]).

Claim 5: Bishop-Bae teaches: wherein: the incorrectly tagged video frame is determined by inspecting the tagged values of a set of consecutive preceding and following video frames, wherein a video frame is confirmed to be incorrectly tagged when the tagged value of the video frame is different from the tagged values of the set of consecutive preceding and following video frames; and the tagged value of the determined incorrectly tagged video frame is replaced with the value of at least one of the video frames among the inspected set of consecutive preceding and following video frame. (e.g., Bae [0022]-[0023]) The same motivation as presented in claim 4 would apply.
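The claim 4-5 scheme the Office Action maps to Bae, in which a frame whose tag differs from the tagged values of a set of consecutive preceding and following frames is deemed incorrectly tagged and its value is replaced with a neighbor's, can be sketched in a few lines. This is a minimal illustration only; the window size and the choice of which neighbor's value to copy are assumptions, not taken from the claims:

```python
def correct_tags(tags, window=2):
    """Sketch of the claim 4-5 correction scheme: a frame is treated as
    incorrectly tagged when its tag differs from every tag in a window of
    consecutive preceding and following frames, and is then replaced with
    the value of one of those neighboring frames."""
    fixed = list(tags)
    for i in range(len(tags)):
        before = tags[max(0, i - window):i]
        after = tags[i + 1:i + 1 + window]
        neighbors = before + after
        if neighbors and all(t != tags[i] for t in neighbors):
            fixed[i] = neighbors[0]  # copy a preceding/following frame's tag
    return fixed

# A lone "entity absent" tag inside a run of "entity present" tags is
# treated as a detection glitch and corrected:
print(correct_tags([1, 1, 0, 1, 1]))  # -> [1, 1, 1, 1, 1]
```

In practice the window size would be tuned to the length of genuine transitions, so that real entry and exit events are not smoothed away along with the glitches.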
Claim 19: Bishop teaches tagging a video frame with a first value if one or more reference entity is detected in the video frame; tagging a video frame with a second value if reference entity is absent in the video frame (e.g., [0059]-[0060]) and does not appear to explicitly teach, but Bae teaches: identifying incorrectly tagged video frames among the plurality of video frames by analysing tags of immediately preceding and following video frames; and replacing the value of the incorrectly tagged video frames with a value of one of the preceding or following video frames, wherein: the incorrectly tagged video frame is determined by inspecting the tagged values of a set of consecutive preceding and following video frames, wherein a video frame is confirmed to be incorrectly tagged when the tagged value of the video frame is different from the tagged values of the set of consecutive preceding and following video frames; and the tagged value of the determined incorrectly tagged video frame is replaced with the value of at least one of the video frames among the inspected set of consecutive preceding and following video frame. (e.g., [0022]-[0023]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Bae into the invention of Bishop, and the motivation for such an implementation would be for the purpose of preventing unpleasant user experiences in high-definition video environments (Bae [0002]).

Claims 6-8, 10, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bishop et al. (US 2021/0012032) in view of Cavallaro (US 6,252,632).

Claim 6: Bishop teaches the reference entity in each of the video frames (see above) and does not appear to explicitly teach, but Cavallaro teaches: predict orientation in each of video frames; predict in each of the video frames; and predict size in each of the video frames. (e.g., col. 10 ll. 35-57, col. 20 ll. 34-60, col. 22 ll. 20-35) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Cavallaro into the invention of Bishop, and the motivation for such an implementation would be for the purpose of enhancing a video presentation of an object (Cavallaro col. 1).

Claim 7: Bishop-Cavallaro teaches: predict point of view of the reference entity in each of the video frames based on the detected orientation of the reference entity; and predict field of view captured in each of the video frames based on the predicted size and the position of the reference entity. (e.g., Cavallaro col. 10 ll. 35-57, col. 20 ll. 34-60, col. 22 ll. 20-35) The same motivation as presented in claim 6 would apply.

Claim 8: Bishop-Cavallaro teaches: wherein for a predetermined point of view when at least a second portion of the video frames comprises of the reference entity, the one or more processors are configured to anonymize portions of the video frames other than the second portion. (e.g., Bishop [0106], [0108], [0114], [0135])

Claim 10: Bishop-Cavallaro teaches: wherein for a predetermined field of view when at least a fourth portion of the video frames comprises of the reference entity, the one or more processors are configured to anonymize portions of the video frames other than the fourth portion. (e.g., Bishop [0106], [0114], [0135])

Claim 12: Bishop-Cavallaro teaches: wherein for a predetermined field of view and a predetermined point of view the one or more processors are configured to anonymize entire video frames. (e.g., Bishop [0106], [0108], [0114], [0135])

Claim 20: This claim is directed to a method containing similar limitations as recited in claim 6 and is rejected using the same rationale to combine the references.

Claims 9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Bishop et al. (US 2021/0012032) in view of Cavallaro (US 6,252,632) further in view of Venkataraman et al. (US 2020/0372180).

Claim 9: Bishop-Cavallaro teaches wherein for a predetermined point of view when at least a third portion of the video frames comprises of the reference entity, the one or more processors are configured to anonymize portions of the video frames other than the third portion (e.g., Bishop [0106], [0114], [0135]) and does not appear to explicitly teach, but Venkataraman teaches: zoom into at least a third portion of video frames. (e.g., [0055]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Venkataraman into the invention of Bishop-Cavallaro, and the motivation for such an implementation would be for the purpose of anonymizing raw surgical procedure videos to de-identify personally identifiable information and providing the anonymized surgical procedure videos for various research purposes (Venkataraman [0001]).

Claim 11: Bishop-Cavallaro teaches wherein for a predetermined field of view when at least a fifth portion of the video frames comprises of the reference entity, the one or more processors are configured to anonymize portions of the video frames other than the fifth portion (e.g., Bishop [0106], [0114], [0135]) and does not appear to explicitly teach, but Venkataraman teaches: zoom into at least a fifth portion of video frames. (e.g., [0055]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Venkataraman into the invention of Bishop-Cavallaro, and the motivation for such an implementation would be for the purpose of anonymizing raw surgical procedure videos to de-identify personally identifiable information and providing the anonymized surgical procedure videos for various research purposes (Venkataraman [0001]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Bishop et al. (US 2021/0012032) in view of Cavallaro (US 6,252,632) further in view of Yao et al. (US 2021/0366106).

Claim 13: Bishop-Cavallaro teaches wherein for a predetermined field of view and a predetermined point of view when a sixth portion of the video frames comprises of the reference entity, the one or more processors are configured to: anonymize portion of the video frames other than a predetermined area; the predetermined area is calculated taking the reference entity as a reference point; (e.g., Bishop [0106], [0114], [0135]) and does not appear to explicitly teach, but Yao teaches: a predetermined area is calculated in both X-axis and Y-axis of frames. (e.g., [0068], [0256], [0261]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Yao into the invention of Bishop-Cavallaro, and the motivation for such an implementation would be for the purpose of protecting confidentiality of patients (Yao [0066]).

Claims 14-15 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Bishop et al. (US 2021/0012032) in view of Bae et al. (US 2015/0326884) further in view of Bykov et al. (US 2023/0060780).

Claim 14: Bishop-Bae teaches wherein the one or more processors are configured to analyse the video frames with corrected tag values, wherein the one or more processors are configured to: compare tag value of each video frame with tag value of immediately preceding video frame; and replace tag values of video frames between the video frame with the first label and the video frame with the second label, wherein the tag values are replaced with tag value of the frame immediately preceding the frame with the first label (e.g., Bae [0022]-[0023]; the same motivation as presented in claim 4 would apply) and does not appear to explicitly teach, but Bykov teaches: label a video frame with a first label, when tag values of two adjacent video frames are different, wherein the first label determines change in tag value; label a video frame with a second label, when tag values of two subsequent adjacent video frames are different, wherein the second label determines change in tag value. (e.g., [0061]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Bykov into the invention of Bishop-Bae, and the motivation for such an implementation would be for the purpose of improving the functioning of a computer itself by reducing a number of frames processed to more efficiently detect shot changes, and improving the technical field of video processing by providing more efficient shot-change detection (Bykov [0012]).

Claim 15: Bishop-Bae-Bykov teaches: wherein the one or more processors are configured to anonymize video frames based on the updated tag values of the video frames. (e.g., Bishop [0114]; Bykov [0035])

Claim 22: This claim is directed to a method containing similar limitations as recited in claim 14 and is rejected using the same rationale to combine the references.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Bishop et al. (US 2021/0012032) in view of Venkataraman et al. (US 2020/0372180).

Claim 21: Bishop teaches anonymizing portions of the video frames other than portion comprising reference entity, for a predetermined point of view and a predetermined field of view (e.g., [0106], [0114], [0135]) and does not appear to explicitly teach, but Venkataraman teaches: zooming into at least a portion of video frames. (e.g., [0055]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings described by Venkataraman into the invention of Bishop, and the motivation for such an implementation would be for the purpose of anonymizing raw surgical procedure videos to de-identify personally identifiable information and providing the anonymized surgical procedure videos for various research purposes (Venkataraman [0001]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure: US 2021/0134436 teaches systems and methods for anonymizing clinical data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIE C LIN, whose telephone number is (571) 272-7752. The examiner can normally be reached M-F 9:00 AM-5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, GELAGAY SHEWAYE, can be reached at (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMIE C. LIN/
Primary Examiner, Art Unit 2436
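Read end to end, claims 1 and 3 as characterized in the Office Action describe a simple per-frame pipeline: detect a reference entity in each frame, tag each frame by presence or absence, and anonymize at least a portion of each frame where the entity is absent. A minimal sketch follows; the Frame type, the detector, and the blur placeholder are all hypothetical stand-ins, since the application leaves the detector and the anonymization operation open (Bishop's analogue is detecting an expanding or contracting port circle):

```python
from dataclasses import dataclass

# First/second tag values from claim 3; the concrete values are arbitrary.
PRESENT, ABSENT = 1, 0

@dataclass
class Frame:
    pixels: object      # stand-in for the frame's image data
    tag: int = ABSENT

def detect_reference_entity(frame):
    """Hypothetical detector; truthy pixels stand in for a detected entity."""
    return bool(frame.pixels)

def anonymize(frame):
    frame.pixels = "blurred"  # stand-in for masking/blurring frame content
    return frame

def process(frames):
    # Claim 3: tag each frame by presence/absence of the reference entity.
    for f in frames:
        f.tag = PRESENT if detect_reference_entity(f) else ABSENT
    # Claim 1: anonymize (a portion of) each frame where the entity is absent.
    return [anonymize(f) if f.tag == ABSENT else f for f in frames]

frames = [Frame("entity"), Frame(""), Frame("entity")]
print([f.pixels for f in process(frames)])  # -> ['entity', 'blurred', 'entity']
```

The tag pass and the anonymize pass are kept separate because claims 4-5 and 14 operate on the intermediate tag sequence (correcting mis-tagged frames and labeling tag transitions) before any anonymization is applied.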

Prosecution Timeline

Oct 08, 2024
Application Filed
Jan 07, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603897
ATTACK CHAIN IDENTIFICATION VIA MISCONFIGURATIONS IN CLOUD RESOURCES
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598468
MULTI-CHANNEL DEVICE CONNECTION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12598194
FINE GRANULARITY CONTROL OF DATA ACCESS AND USAGE ACROSS MULTI-TENANT SYSTEMS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12598184
METHODS FOR CONSTRUCTING TRUSTED GRID, TRUSTED GRIDS, AND APPLICATION INTERACTION METHODS THEREON
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12587505
SECURE AND PRIVATE NETWORK COMMUNICATIONS
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 99% (+30.2%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 300 resolved cases by this examiner. Grant probability derived from career allow rate.
