Prosecution Insights
Last updated: April 19, 2026
Application No. 17/911,093

DIGITAL PATHOLOGY RECORDS DATABASE MANAGEMENT

Status: Non-Final OA (§103)
Filed: Sep 12, 2022
Examiner: WORKU, SARON MATTHEWOS
Art Unit: 2408
Tech Center: 2400 — Computer Networks
Assignee: Memorial Sloan Kettering Cancer Center
OA Round: 3 (Non-Final)
Grant Probability: 67% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% — above average (12 granted / 18 resolved; +8.7% vs TC avg)
Interview Lift: +53.6% (strong) among resolved cases with an interview, vs. without
Typical Timeline: 2y 7m avg prosecution; 30 applications currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 46.6% (+6.6% vs TC avg)
§102: 37.0% (-3.0% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 18 resolved cases

Office Action

§103
DETAILED ACTION

This office action is in response to applicant's submission filed on January 13, 2026. Claims 5 and 15 are canceled. Claims 1-4, 6-14, and 16-20 are pending and rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 13, 2026 has been entered.

Response to Amendment

This communication is in response to the amendment filed on January 13, 2026. The Examiner has acknowledged amended claims 1, 4, 11, 14, and 20. Claims 1-4, 6-14, and 16-20 are pending and are rejected.

Response to Arguments

Applicant's Arguments (Remarks) filed January 13, 2026 have been fully considered, but are not persuasive. Applicant argues that the cited art does not disclose each and every limitation of amended claims 1, 4, 11, 14, and 20. Examiner respectfully disagrees. The amendments merely reword the original claim language or add details that were already mapped, and the claims are therefore rejected using the same prior art. Examiner also notes that after the search query is performed, the results are a de-identified digital pathology record that includes the biomedical image, and the policy and format are in accordance with the stored information in order to stay consistent, as stated in the Final Rejection mailed 07/15/2025. See also the §103 rejection below.
The remainder of the arguments set forth by the applicant are not persuasive due to the new grounds of rejection.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2008/0120296 A1 to Kariathungal et al. (hereinafter, “Kariathungal”) in view of US 2012/0041791 A1 to Gervais et al. (hereinafter, “Gervais”) and in further view of US 2016/0307063 A1 to Bright et al. (hereinafter, “Bright”).
Regarding claim 1, Kariathungal discloses: A method of maintaining databases of biomedical images, comprising: aggregating, by one or more processors, a plurality of digital pathology records from a plurality of data sources onto a database, each of the plurality of digital pathology records generated by a data source of the plurality of data sources in accordance with a format used by the data source (“The data import function 206 aggregates and cleanses de-identified patient data from multiple sites and then stores the data into a staging area 208. Data received from multiple PCP systems 108 is normalized, checked for validity and completeness, and either corrected or flagged as defective” [0032]; “The method further includes crawling one or more databases associated with one or more clinical information systems to identify relevant data based on the search query” [0014]), each of the plurality of digital pathology records identifying a biomedical image of a sample and data comprising a subject from which the sample is obtained (“Patient data is sensitive and confidential, and therefore, specific identifying information must be removed prior to transporting it from a PCP site to a central data warehouse” [0009]; “Images and measurements are taken and sent to the CVIS server. The reading physician (e.g., an echocardiographer) sits down at a review station and pulls the patient's TTE study. The echocardiographer then begins to review the images and measurements and creates a complete medical report on the study” [0005]; “Even if clinical or image-related information is organized, current systems often organize data in a format determined by developers that is unusable by one or more medical practitioners in the field. Additionally, information may be stored in a format that does not lend itself to data retrieval and usage in other contexts.
Thus, a need exists to structure data and instructions in a way that is easier to comprehend and utilize” [0008]; “Search capabilities may include searching images and/or locating similar images from search results” [0063] [Examiner notes that these texts show that biomedical images derived from samples are included in the digital pathology records of the patient. Also it shows how data stored within the record are associated with the image as well]). receiving, by the one or more processors from a client device, a query identifying a selection criterion for retrieving digital pathology records from the database (“At step 820, a search query is formed from the one or more search terms. For example, the interface may combine terms in a format suitable for searching. At step 830, the search query is sent to the search engine for searching. For example, the interface routes the query to the search engine for searching” [FIG. 8, 0072]); accessing, by the one or more processors, the database to identify a subset of digital pathology records from the plurality of digital pathology records using the selection criterion identified by the query (“At step 840, term(s) of the search query are used to search a clinical database, such as an EMR database. A search crawler, for example, may search one or more clinical databases using the search query terms(s) in a free text search” [0073]); for each digital pathology record of the subset (“Search results are then provided to the search index 640, which may keep a running track of search results” [0068]): and providing, by the one or more processors to the client device (“At step 850, search results are formatted for output. 
For example, search results may be formatted for display via a web-based interface” [0074]), the de-identified digital pathology record comprising the biomedical image and the modified one or more bytes corresponding to the metadata (“The formatter may de-identify patient and/or physician data from the search results, for example” [0013] [Examiner notes that after the query is performed (search query), the results are a de-identified digital pathology record that includes the biomedical image]).

Kariathungal does not explicitly disclose: identifying, by the one or more processors, a data source of the plurality of data sources that generated the digital pathology record; selecting, by the one or more processors, from a plurality of de-identification policies, a de-identification policy to apply to the digital pathology record based on the data source; identifying, by the one or more processors, from a first file corresponding to the digital pathology record, (i) a first portion comprising one or more bytes corresponding to metadata identifying the subject and modifying, by the one or more processors, the one or more bytes in the first portion of the file corresponding to the metadata in accordance with the selected de-identification policy and the format used by the data source to generate a second file corresponding to a de-identified digital pathology record; and the second file corresponding to the de-identified digital pathology record comprising the biomedical image and the modified one or more bytes corresponding to the metadata.

However, Gervais discloses: identifying, by the one or more processors, a data source of the plurality of data sources that generated the digital pathology record (“The table defines a data source identifier 402, a name 404, a claim amount 406, a policy identifier 408, and a claim status 410” [0046]); selecting, by the one or more processors, from a plurality of de-identification policies, a de-identification policy to apply to the digital pathology record based on the data source (“According to
still other embodiments, the data de-identification service will store information about the selected obfuscation methods and automatic replacement in connection with the original data. For example, the service might remember which fields contained personal information and which obfuscation methods were used for each of those fields” [0049] [Examiner notes that the methods of obfuscation and the policies to de-identify are inherently used for the same purpose and therefore are seen as equivalent]); identifying, by the one or more processors, from a first file corresponding to the digital pathology record, (i) a first portion comprising one or more bytes corresponding to metadata identifying the subject (“In some embodiments, original data is received from a data store, and metadata data associated with the original data is analyzed to create an inventory of elements in the original data. The elements in the inventory may then be searched for potential personal information based on at least one character matching rule. An obfuscation rule may be evaluated, and scripts may be automatically created. The scripts may, for example, be used during a database refresh process to replace the potential personal information in the original data with fictional data in accordance with (i) a result of said evaluation and (ii) an obfuscation method selected from a set of potential obfuscation methods” [0020]; “FIG. 7 illustrates another method according to some embodiments of the present invention. At 702, original data is received from a data store. Metadata data and/or summary data associated with the original data is analyzed to create an inventory of elements in the original data at 704, and elements in the inventory are searched for potential personal information based on at least one character matching rule at 706. 
A character matching rule might be associated with, for example, a number of characters in a string or other data factors (e.g., a size, type, and/or length of data)” [0055] [Examiner notes that the metadata data associated with the data is seen as the portion comprising one or more bytes corresponding to metadata. This metadata identifies the subject as the search can be for personal information, so the metadata naturally includes subject-identifying data]) and modifying, by the one or more processors, the one or more bytes in the first portion of the file corresponding to the metadata (“In some embodiments, original data is received from a data store, and metadata data associated with the original data is analyzed to create an inventory of elements in the original data. The elements in the inventory may then be searched for potential personal information based on at least one character matching rule. An obfuscation rule may be evaluated, and scripts may be automatically created. The scripts may, for example, be used during a database refresh process to replace the potential personal information in the original data with fictional data in accordance with (i) a result of said evaluation and (ii) an obfuscation method selected from a set of potential obfuscation methods” [0020] [Examiner notes that potential personal information is the kind of data stored in metadata. Since it is replacing it with fictional data the original metadata is overwritten and the underlying bytes storing those values are changed. 
There is no way to replace data in a file without modifying the bytes that encode it]), generate a second file corresponding to a de-identified digital pathology record (“In this way, the server can later retrieve updated original data from the data source, and then automatically replace potential personal information in the updated original data with fictional data in accordance with the stored information about the selected obfuscation methods and automatic replacement (e.g., so that the replaced fields and the replacement techniques remain consistent when the data is updated)” [0049] [Examiner notes that the policy and format here are in accordance with the data as it is in accordance with the stored information in order to stay consistent]; “In this case, a data de-identification service engine 1450 may retrieve information from both a first original data source 1420A and a second original data source 1420B. In some cases, the data de-identification service engine 1450 may retrieve primary information from the first original data source 1420A and supplemental information from the second original data source 1420B. Based on the retrieved information, the data de-identification service engine 1450 creates “clean” versions (without personal information) as a first data store 1422A and a second data store 1422B” [0070]); and the second file corresponding to the de-identified digital pathology record comprising the biomedical image and the modified one or more bytes corresponding to the metadata (“In some embodiments, original data is received from a data store, and metadata data associated with the original data is analyzed to create an inventory of elements in the original data. The elements in the inventory may then be searched for potential personal information based on at least one character matching rule. An obfuscation rule may be evaluated, and scripts may be automatically created.
The scripts may, for example, be used during a database refresh process to replace the potential personal information in the original data with fictional data in accordance with (i) a result of said evaluation and (ii) an obfuscation method selected from a set of potential obfuscation methods” [0020] [Examiner notes that potential personal information is the kind of data stored in metadata. Since it is replaced with fictional data, the original metadata is overwritten and the underlying bytes storing those values are changed. There is no way to replace data in a file without modifying the bytes that encode it]; “In this case, a data de-identification service engine 1450 may retrieve information from both a first original data source 1420A and a second original data source 1420B. In some cases, the data de-identification service engine 1450 may retrieve primary information from the first original data source 1420A and supplemental information from the second original data source 1420B. Based on the retrieved information, the data de-identification service engine 1450 creates “clean” versions (without personal information) as a first data store 1422A and a second data store 1422B” [0070]).

Thus, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, to combine the method of Kariathungal's medical data warehouse with the added structure of Gervais' data-source-specific de-identification warehouse in order to retrieve data in a similar format to the original records as Gervais suggests (“The test data store 122 might include similar information, formatted in a similar way, as the original data source 120 without including, for example, any actual PII or PHI values (or other types of information) that were included in the original data source 120” [0033]).
Kariathungal-Gervais do not explicitly disclose: (ii) one or more second portions comprising image data for the biomedical image in accordance with the format.

However, Bright discloses: (ii) one or more second portions comprising image data for the biomedical image in accordance with the format (“Once the user has drawn the rectangle defining the redaction region, then the user can instruct the PMDD to perform the redaction(s) and display via a live preview GUI, a live preview screen such as is shown in FIG. 6, in order for the user to assess whether the change to the image removes all the sensitive data intended to be removed without removing an excessive amount of other information in the image” [FIG. 5-6, 0054]; “Once the user is happy with the results seen in the preview screen, the de-identifier containing the redaction rule based on the redaction region can be saved and used to de-identify study data, or the user may add one or more additional redaction rules specifying additional redaction regions to the de-identifier” [0054] [Examiner notes that after the query is performed (search query), the results are a de-identified digital pathology record that includes the biomedical image, and the policy and format are in accordance with the stored information in order to stay consistent]).

Thus, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, to integrate Bright's DICOM redaction region with Kariathungal's medical data warehouse, modified by Gervais' data-source-specific de-identification, in order to retrieve data in a similar format to the original records as Gervais suggests [0033].
Claim 11 recites substantially the same limitations as claim 1, in the form of a system for maintaining databases of biomedical images implementing the corresponding method; therefore it is rejected under the same rationale.

Regarding claims 2 and 12, Kariathungal-Gervais-Bright disclose all limitations of claims 1/11. Furthermore, Gervais discloses: identifying, by the one or more processors for each digital pathology record of the subset, in accordance with the de-identification policy, the data to be modified in the digital pathology record, the de-identification policy specifying at least one of a truncation, a removal, or an overwrite of at least a corresponding portion of the data (“The plurality of potential obfuscation methods might include, for example, a random assignment to de-identify information in the original data source (e.g., random characters might replace actual characters or a random name might be selected to replace an actual customer name). Other types of obfuscation methods might include concatenation, truncation (e.g., replacing an actual telephone number with “(555) 555-nnnn” where “n” represents the actual integers of the customer's telephone number)” [Gervais 0045]).

Thus, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, to combine the method of Kariathungal's medical data warehouse with the added structure of Gervais' data-source-specific de-identification warehouse in order to protect personal information as Gervais suggests (“For example, Personally Identifiable Information (PII) and/or Personal Health Information (PHI) as defined in various US governmental regulations must be protected in accordance with specific sets of rules” [0003]).

Regarding claims 3 and 13, Kariathungal-Gervais-Bright disclose all limitations of claims 1/11.
Furthermore, Gervais discloses: identifying, by the one or more processors, using pattern recognition, additional information to modify from the digital pathology record subsequent to modifying the data in accordance with the de-identification policy (“Metadata data and/or summary data associated with the original data is analyzed to create an inventory of elements in the original data at 704, and elements in the inventory are searched for potential personal information based on at least one character matching rule at 706” [Gervais 0055]); and modifying, by the one or more processors, the additional information in the digital pathology record to obtain the de-identified digital pathology record (“Moreover, the system, a user, or a system designer might define a way to recognize potential personal information and/or define how an obfuscation method should be selected” [Gervais FIG. 5, 0053]). The reason for obviousness is stated above in claims 2/12.

Regarding claims 4 and 14, Kariathungal-Gervais-Bright disclose all limitations of claims 1/11. Furthermore, Gervais discloses: identifying, by the one or more processors for a second digital pathology record of the subset, a third file containing second metadata and a fourth file containing a second biomedical image for the second digital pathology record in accordance with a second format used by a second data source to generate the second digital pathology record (“In this case, a data de-identification service engine 1450 may retrieve information from both a first original data source 1420A and a second original data source 1420B.
In some cases, the data de-identification service engine 1450 may retrieve primary information from the first original data source 1420A and supplemental information from the second original data source 1420B” [Gervais 0070] [Examiner notes that it has already been taught that the file could contain a biomedical image above]; “In some embodiments, original data is received from a data store, and metadata data associated with the original data is analyzed to create an inventory of elements in the original data. The elements in the inventory may then be searched for potential personal information based on at least one character matching rule. An obfuscation rule may be evaluated, and scripts may be automatically created. The scripts may, for example, be used during a database refresh process to replace the potential personal information in the original data with fictional data in accordance with (i) a result of said evaluation and (ii) an obfuscation method selected from a set of potential obfuscation methods” [Gervais 0020] [Examiner notes that potential personal information is the kind of data stored in metadata. Since it is replaced with fictional data, the original metadata is overwritten and the underlying bytes storing those values are changed. There is no way to identify and replace data in a file without modifying the bytes that encode it]); and modifying, by the one or more processors, the second metadata contained in the first file separate from the second file in accordance with the de-identification policy selected for the second digital pathology record (“Based on the retrieved information, the data de-identification service engine 1450 creates “clean” versions (without personal information) as a first data store 1422A and a second data store 1422B” [Gervais 0070]; “In some embodiments, original data is received from a data store, and metadata data associated with the original data is analyzed to create an inventory of elements in the original data.
The elements in the inventory may then be searched for potential personal information based on at least one character matching rule. An obfuscation rule may be evaluated, and scripts may be automatically created. The scripts may, for example, be used during a database refresh process to replace the potential personal information in the original data with fictional data in accordance with (i) a result of said evaluation and (ii) an obfuscation method selected from a set of potential obfuscation methods” [Gervais 0020] [Examiner notes that potential personal information is the kind of data stored in metadata. Since it is replaced with fictional data, the original metadata is overwritten and the underlying bytes storing those values are changed. There is no way to identify and replace data in a file without modifying the bytes that encode it]). The reason for obviousness is stated above in claim 1.

Regarding claims 6 and 16, Kariathungal-Gervais-Bright disclose all limitations of claims 1/11.
Furthermore, the combination of Kariathungal-Gervais discloses: aggregating a plurality of location identifiers from the plurality of data sources, the plurality of location identifiers identifying the biomedical image and the data for each of the plurality of digital pathology records (“In an exemplary embodiment of the present invention, the following identifiers are removed or transformed in order to create de-identified data that would be classified under the HIPAA definition as fully de-identified data… unified resource locator (URL)” [Kariathungal 0036] [Examiner notes that in applicant's specification (paragraph 0042), a URL is seen as an example of a location identifier in order to identify the biomedical image and/or file of the data source present]), and wherein accessing the database further comprises retrieving the subset of digital pathology records from one or more of the plurality of data sources using a subset of location identifiers corresponding to the subset of digital pathology records (“For example, when the original data is associated with a spreadsheet or database (e.g., a multidimensional hypercube of business information), the automatic search performed at 304 might be based on schema information, table information, metadata, and/or summary data” [Gervais 0044]). The reason for obviousness is stated above in claim 1.

Regarding claims 7 and 17, Kariathungal-Gervais-Bright disclose all limitations of claims 1/11. Furthermore, Kariathungal discloses: accessing the database to identify the subset of digital pathology records from the plurality of digital pathology records, each of the subset of digital pathology records having an indication of permission for use (“For instance, an administrator may have access to the entire system and have authority to modify portions of the system and a PCP staff member may only have access to view a subset of the data warehouse records for particular patients” [Kariathungal 0029]).
Regarding claims 8 and 18, Kariathungal-Gervais-Bright disclose all limitations of claims 1/11. Furthermore, Gervais discloses: maintaining the plurality of digital pathology records retrieved from the plurality of data sources, without removal of the data identifying the subject in each of the plurality of digital pathology records prior to receiving the query (“Note that it might be beneficial for the enterprise to allow access to information in the original data store 120, including personal information, even when the information is not needed to process a transaction with a customer” [Gervais 0032]). The reason for obviousness is stated above in claim 1.

Regarding claims 9 and 19, Kariathungal-Gervais-Bright disclose all limitations of claims 1/11. Furthermore, Gervais discloses: aggregating the plurality of digital pathology records, each of the plurality of digital pathology records identifying a part description, an image identifier, and a descriptor (“Note that automatic search for potential personal information may utilize any type of information within, or about, the data source. For example, when the original data is associated with a spreadsheet or database (e.g., a multidimensional hypercube of business information), the automatic search performed at 304 might be based on schema information, table information, metadata, and/or summary data” [Gervais 0044]). Kariathungal-Gervais do not explicitly disclose: aggregating the plurality of digital pathology records, each of the plurality of digital pathology records identifying the data identifying a date at which the biomedical image of the sample from the subject is acquired, a part description, an image identifier, and a descriptor.
However, Bright discloses: aggregating the plurality of digital pathology records, each of the plurality of digital pathology records identifying the data identifying a date at which the biomedical image of the sample from the subject is acquired, a part description, an image identifier, and a descriptor (“The pseudonym value for a metadata element that is a date or time associated with the production of the DICOM image may be a different date or time that is offset from the value of the metadata element by an offset value, where the de-identification program generates pseudonym values for all metadata element values that are dates or times associated with the production of the DICOM image by adding the same offset value to the metadata element values” [Bright 0022]).

Thus, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, to integrate Bright's DICOM metadata date pseudonyms with Kariathungal's medical data warehouse, modified by Gervais' data-source-specific de-identification, in order to allow for consistent pseudonym values between documents with the same patient as Bright suggests (“The values of the metadata element specified by the substitution rule may be indexed in the pseudonym memory by DICOM patient ID, and a stored pseudonym value may then be considered to be suitable if it has previously been used to replace the value of the metadata element in a DICOM file associated with the same patient ID” [0020]).

Regarding claims 10 and 20, Kariathungal-Gervais-Bright disclose all limitations of claims 1/11.
Furthermore, Gervais discloses: storing, by the one or more processors, for each digital pathology record of the subject, the de-identified digital pathology record onto the database to replace the corresponding digital pathology record of the subject (“According to other embodiments, the data de-identification service engine 150 might instead re-write “scrubbed” data back into the original data source 120 and/or use a copy of the original data source 120 and/or the test data store 122” [Gervais 0033]). The reason for obviousness is stated above in claim 1.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Kakhovsky et al. (US 2021/0343379 A1) teaches a method for removing personal data from a medical record that includes receiving a first digital medical record associated with a first patient, the first digital medical record including personal data and medical data; generating a first fingerprint by applying a cryptographic function to at least a portion of the personal data; transferring the first fingerprint to one or more computing devices; identifying a first alias associated with the first fingerprint, wherein identifying the first alias includes at least one of determining that the first fingerprint was previously stored along with the first alias, and generating the first alias and storing the first fingerprint and the first alias; transferring the first alias to the workstation of the first user; and generating a first clean digital medical record based on the first alias and the first digital medical record.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARON MATTHEWOS WORKU whose telephone number is (703)756-1761. The examiner can normally be reached Monday - Friday, 9:30 am - 6:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Linglan Edwards, can be reached at 571-270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SARON MATTHEWOS WORKU/
Examiner, Art Unit 2408

/LINGLAN EDWARDS/
Supervisory Patent Examiner, Art Unit 2408
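The fingerprint-and-alias method attributed to the cited Kakhovsky reference — hash the personal data, look the hash up in an alias store, and either reuse or generate-and-store an alias — can be sketched as below. This is a hypothetical illustration of that pattern, not Kakhovsky's implementation; the SHA-256 choice, the `SUBJ-` alias format, and the record shape are all assumptions.

```python
import hashlib
import secrets

class AliasRegistry:
    """Maps a cryptographic fingerprint of personal data to a stable alias:
    the first lookup for a fingerprint generates and stores an alias, and
    any later lookup with the same fingerprint returns the stored alias."""

    def __init__(self):
        self._aliases: dict[str, str] = {}

    def alias_for(self, personal_data: str) -> str:
        # Fingerprint = cryptographic hash of the personal data.
        fingerprint = hashlib.sha256(personal_data.encode()).hexdigest()
        if fingerprint not in self._aliases:
            self._aliases[fingerprint] = "SUBJ-" + secrets.token_hex(4)
        return self._aliases[fingerprint]

def clean_record(record: dict, registry: AliasRegistry) -> dict:
    """Produce a 'clean' copy: personal data replaced by its alias,
    medical data carried over unchanged."""
    return {
        "patient": registry.alias_for(record["personal"]),
        "medical": record["medical"],
    }
```

Because aliases are keyed by fingerprint rather than assigned per record, two records carrying the same personal data de-identify to the same alias, so a patient's records remain linkable without exposing identity.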

Prosecution Timeline

Sep 12, 2022
Application Filed
Mar 27, 2025
Non-Final Rejection — §103
Jun 02, 2025
Interview Requested
Jun 09, 2025
Examiner Interview Summary
Jun 09, 2025
Applicant Interview (Telephonic)
Jul 02, 2025
Response Filed
Jul 11, 2025
Final Rejection — §103
Sep 22, 2025
Interview Requested
Oct 02, 2025
Applicant Interview (Telephonic)
Oct 14, 2025
Examiner Interview Summary
Jan 13, 2026
Request for Continued Examination
Jan 13, 2026
Response after Non-Final Action
Jan 25, 2026
Response after Non-Final Action
Jan 29, 2026
Non-Final Rejection — §103
Mar 16, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547939
SYSTEM AND A METHOD FOR PERFORMING A PRIVACY-PRESERVING DISTRIBUTION SIMILARITY TESTS BETWEEN A PLURALITY OF DATASETS
2y 5m to grant Granted Feb 10, 2026
Patent 12524579
SRAM PHYSICALLY UNCLONABLE FUNCTION (PUF) MEMORY FOR GENERATING KEYS BASED ON DEVICE OWNER
2y 5m to grant Granted Jan 13, 2026
Patent 12513013
Dynamic Cross-Node Multidimensional Hashchain Network-Based Meta-Content Enabler for Real-Time Content Based Anomaly Detection
2y 5m to grant Granted Dec 30, 2025
Patent 12475240
PROTECTED CONTENT CONTAMINATION PREVENTION
2y 5m to grant Granted Nov 18, 2025
Patent 12470519
INTRA-VLAN TRAFFIC FILTERING IN A DISTRIBUTED WIRELESS NETWORK
2y 5m to grant Granted Nov 11, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+53.6%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
