Prosecution Insights
Last updated: April 19, 2026
Application No. 18/374,657

EDITABLE FORM FIELD DETECTION

Non-Final OA (§103, §112)
Filed
Sep 28, 2023
Examiner
RIEGLER, PATRICK F
Art Unit
2171
Tech Center
2100 — Computer Architecture & Software
Assignee
Apple Inc.
OA Round
3 (Non-Final)
Grant Probability: 55% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 5m
Grant Probability with Interview: 89%

Examiner Intelligence

Career Allow Rate: 55% (189 granted / 346 resolved; at TC average)
Interview Lift: +34.6% (strong; allow rate of resolved cases with vs. without an interview)
Avg Prosecution: 4y 5m (typical timeline)
Currently Pending: 36
Total Applications: 382 (career history, across all art units)
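The headline figures above follow from simple ratios. The sketch below reproduces them; note that the page reports only the interview lift, not the underlying with/without-interview counts, so the split used here is hypothetical (chosen only to sum to the real totals and produce a similar lift):

```python
# Career allow rate, from the counts shown above.
granted, resolved = 189, 346
print(f"{granted / resolved:.1%}")  # 54.6%, displayed as 55%

# Interview lift: allow rate of resolved cases with an interview minus
# the allow rate without one. The page reports only the lift (+34.6%),
# so the split below is hypothetical.
with_granted, with_resolved = 80, 100
without_granted, without_resolved = 109, 246
lift = with_granted / with_resolved - without_granted / without_resolved
print(f"{lift:+.1%}")  # roughly +35.7% under these assumed counts
```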

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 51.9% (+11.9% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)
TC average is an estimate. Based on career data from 346 resolved cases.
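Each per-statute delta reads as the examiner's allowance rate for that statute minus the Tech Center average, and backing the average out of any row gives the same value. A quick check (assuming the deltas are simple percentage-point differences, as the caption suggests):

```python
# Allowance rate after each rejection statute, with the reported
# delta vs the Tech Center average (both in percentage points).
rows = {
    "101": (8.7, -31.3),
    "103": (51.9, 11.9),
    "102": (14.5, -25.5),
    "112": (18.2, -21.8),
}
for statute, (rate, delta) in rows.items():
    print(f"§{statute}: implied TC average = {rate - delta:.1f}%")
# Every row backs out the same 40.0% Tech Center average estimate.
```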

Office Action

§103 §112
DETAILED ACTION

This Non-Final communication is in response to Application No. 18/374,657, filed 9/28/2023, which claims priority from Provisional Application No. 63/465,220, filed 5/9/2023. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The Request for Continued Examination and Amendment presented on 3/3/2026, which provides amendment to claims 1, 2, 5, 7-12, 15, and 17-20, cancellation of claims 6 and 16, and new claims 21 and 22, is hereby acknowledged. Claims 1-5, 7-15, and 17-22 are currently pending.

Claim Rejections – Withdrawn

The previous 35 U.S.C. § 101 rejection of claims 1-20 is withdrawn as necessitated by amendment.

Response to Arguments

The amendments to the claims pertaining to metadata having information relevant to a user and auto-filling fields necessitated an updated search and consideration. Applicant's arguments with respect to Mehra disclosing "displaying the structured form data as an electronically fillable form based on the metadata" have been considered but are not persuasive. Indeed, as admitted by Applicant, Mehra discloses generating "a fillable form using the fillable region data" and "fillable regions for accepting input". However, Mehra also implies the purpose of metadata is to "identif[y] a region as something to be filled in and/or that identifies the type of input data the region should accept" (Mehra, [0014]). Additionally, Mehra discloses that metadata is part of fillable region data (Mehra, [0036]). If the fillable form is based on fillable region data, which in turn includes metadata, then Mehra can be interpreted as displaying the fillable form based on at least some metadata. Therefore, Mehra is maintained as disclosing displaying the structured form data as an electronically fillable form based on the metadata. See the rejections below.
Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-5, 7-15, and 17-22 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.

Regarding claim 1 (and similarly in claims 11 and 20), at least "displaying the structured form data as an electronically fillable form based on the metadata, with an indication of the identified one or more fields, and wherein at least one of the identified one or more fields is auto filled with the information relevant to the user" does not appear enabled by the specification. Specifically, the claims, and the specification, appear to make too great a leap from (1) generating metadata defining fields of a form to (2) displaying a structured form as electronically fillable.
The specification indicates the existence of an application that can perform the process of Figure 5 "by calling APIs provided by the operating system and/or another application on the electronic device 102" or "without making any API calls to the operating system of the electronic device" (Specification at [0043]). However, going from metadata to editable form fields would require undue experimentation by one of ordinary skill in the art to create said applications or APIs. The specification at [0058] goes on to state "[t]he metadata may be embedded into a file in association with the form data (e.g., the image of the form) to provide the form with structure thereby allowing the form to be filled or otherwise manipulated in a digital environment, such as across different applications and/or across different electronic devices." This portion seems to imply that simply including metadata with an image of the form allows any application across any device to display an electronically fillable form, which one of ordinary skill in the art would understand is not the case.

The specification also makes reference to portable document format (PDF), but describes it as "lack[ing] digital structure and therefore may not include editable fields that can be filled electronically" (Specification at [0012]). The specification then asserts "structured form data may include a fillable PDF in which a user may input data into defined, fillable fields of form data" (Specification at [0025]). However, again, an explanation of how the metadata is used to create the fillable PDF is missing from the specification. Attached is a publication from Adobe describing metadata of PDF files as recently as September 23, 2025 (https://helpx.adobe.com/acrobat/desktop/edit-documents/edit-pdf-properties/pdf-properties.html).
At most, metadata of a PDF is described as including items such as creation date, modification date, basic information (title, author, subject, keywords), security settings, font information, initial view settings, custom properties, and description. Further described is that "editing metadata doesn't alter the document's content." Therefore, even according to Adobe, PDF metadata does not appear to be usable to create a fillable PDF.

In summary, the specification does not appear to disclose enough of an application or algorithm that is capable of interpreting the generated metadata and creating a structured form with interactive user interface fields without undue experimentation. The Examiner submits the disclosure does not enable one of ordinary skill in the art to make and use the invention, including displaying the structured form data as an electronically fillable form based on the metadata, with an indication of the identified one or more fields, and wherein at least one of the identified one or more fields is auto filled with the information relevant to the user, without undue experimentation. Dependent claims not mentioned inherit the deficiencies of their parent claims.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2, 12, and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 2 (and similarly with claims 12 and 17), "the form data" lacks antecedent basis in the claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-15, and 17-21 are rejected under 35 U.S.C. 103 as being unpatentable over Mehra et al. (US 2023/0230404 A1, filed 01/18/2022, hereafter "Mehra"), and further in view of Vohra et al. (US 2015/0317296 A1, hereinafter "Vohra").

Regarding claim 1, Mehra teaches a method comprising:

obtaining unstructured form data corresponding to a form, wherein the unstructured form data comprises one or more lines of text.
More specifically, at block 902, a form is obtained (e.g., a digital form, construed as unstructured) (Mehra, [0105]). At block 904, when identifying a candidate fillable region, a visual machine learning model, or object detector, can analyze image features, such as a raw image (e.g., RGB image) and a linguistic image (e.g., an image that represents text within a form) (Mehra, [0106]). At block 906, textual context indicating text from the form ("one or more lines of text of the form") and spatial context indicating positions of the text within the form are obtained (Mehra, [0107]).

determining one or more text attributes of the one or more lines of text of the unstructured form data.

More specifically, textual context ("text attributes") generally refers to text in a form, and spatial context generally refers to positions or locations of text within the form (e.g., as identified via a bounding box or coordinates). Such textual context and spatial context can be identified or derived from the form (Mehra, [0107]).

identifying one or more fields of the unstructured form data based at least in part on the determined one or more text attributes.

More specifically, at block 908, fillable region data associated with the candidate fillable region are generated, via a machine learning model, using the candidate fillable region, the textual context, and the spatial context. Advantageously, using the textual and spatial context in relation to the candidate fillable region provides a more accurate prediction of fillable region data (Mehra, [0108]).
transforming the unstructured form data into a structured form data representing the form by structuring the unstructured form data with the identified one or more fields, the structured form data enabling electronic interaction and computer-executed manipulation of the unstructured form data through the identified one or more fields, wherein the structured form data comprises metadata having information relevant to a user;

More specifically, Figures 8 and 9 depict the process starting with a form and generating a "fillable form" structured with "fillable region data", construed as a structured data form (Mehra, [0104]-[0109]). Mehra explicitly states that the problem with the prior art is that "there are many situations where there is no associated metadata that identifies a region as something to be filled in and/or that identifies the type of input data the region should accept" (Mehra, [0014]). The fillable form is generated with fillable region data, which comprises metadata (Mehra, [0036], [0101]).

displaying the structured form data as an electronically fillable form based on the metadata, with an indication of the identified one or more fields, ….

More specifically, at block 910, a fillable form having one or more fillable regions for accepting input is automatically generated using the fillable region data, which comprises metadata that identifies a region as something to be filled in and/or that identifies the type of input data the region should accept. In this regard, fillable regions can be created in a form for presenting to a user to input data into the fillable regions (Mehra, [0014], [0036], [0101], [0109]).

However, Mehra may not explicitly teach every aspect of and wherein at least one of the identified one or more fields is auto filled with the information relevant to the user.

Vohra discloses detecting, validating, and correlating form-fields in a scanned document.
The method comprises displaying a plurality of interactive form-fields associated with a scanned document, wherein each interactive form-field in the plurality of form-fields is defined by a location in the document where one or more previous users entered information on the scanned document, and a data type for the entered information (Vohra, abstract). Metadata is stored with a document that includes at least a field table that defines at least coordinates for a bounding box of each entered field, a user profile characteristic, and the data type of the data entered for each field (Vohra, [0027], [0039], Figure 1). When a document is requested for filling, this metadata is used with a current user's account to auto-populate fields on the form (Vohra, [0033], [0052]-[0053]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Mehra and Vohra, that a method for transforming an unstructured form to a structured/fillable form using detected metadata about text and fields would include wherein at least one of the identified one or more fields is auto filled with the information relevant to the user. With Mehra and Vohra both disclosing presenting a fillable form using metadata with a scanned form, and with Vohra additionally disclosing using the metadata and user profile information to auto-populate fields of the fillable form, one of ordinary skill in the art implementing such a method would include this auto-fill feature in order to make filling out the form as easy and quick as possible. One would therefore be motivated to combine these teachings, as doing so would create this method for transforming an unstructured form to a structured/fillable form.
Regarding claim 2, Mehra and Vohra teach the method of claim 1, wherein the form data is obtained from at least one of an image or a scanned portable document format. More specifically, for an electronic form or document, fillable regions and data associated therewith can be identified and/or generated in an accurate and efficient manner (Mehra, [0002]). In some cases, an initiator will start with a paper document and digitize (e.g., scan, apply optical character recognition (OCR)) the paper document (Mehra, [0014]). In some cases the form is a PDF (Mehra, [0044]-[0045]).

Regarding claim 3, Mehra and Vohra teach the method of claim 1, wherein the indication comprises a respective bounding box displayed around each of the identified one or more fields. More specifically, the fillable region identifier (e.g., via a language model, such as a layout language model) may identify or generate visual features (e.g., using the bounding boxes of the candidate region(s) and/or words) (Mehra, [0068]). (See also Vohra, [0110], Figures 11B-11C.)

Regarding claim 4, Mehra and Vohra teach the method of claim 1, wherein identifying the one or more fields comprises: generating a set of training data, wherein a training instance of the set of training data comprises a training text, a training field, and a training label including training text attributes; training an object detection model using the set of training data; and identifying, by the object detection model, the one or more fields. More specifically, data stored in data store 230 includes training data 232. Training data generally refers to data used to train a machine learning model, or a portion thereof. As such, training data 262 can include images, image features, candidate fillable regions, textual context, spatial context, candidate region features, type of candidate region, and/or the like (Mehra, [0041]).
Regarding claim 5, Mehra and Vohra teach the method of claim 1, wherein the one or more text attributes comprise semantic information, wherein the semantic information includes one or more of punctuation, symbols, capitalization, or subject matter. More specifically, textual context includes a specific type/subject such as a phone number, an address, or a name (Mehra, [0033], [0075], "Patient" and "diagnosis"). (See also Vohra, references to field or data "type".)

Regarding claim 7, Mehra and Vohra teach the method of claim 1, further comprising: providing the unstructured form data to an application or a system process. More specifically, certain form filling or e-signature applications allow users to create fillable forms or digital documents (e.g., contracts) to be signed and/or otherwise filled with information (Mehra, [0014]). (See also Vohra, "e-reader application", [0060], [0070].)

Regarding claim 8, Mehra and Vohra teach the method of claim 1, wherein the metadata is associated with at least one of the identified one or more fields, wherein the metadata further comprises one or more of a location in the structured form data, a font, a name, or an input type. More specifically, metadata describes fields of a form (Mehra, [0014]) including input types, positions, or placements (Mehra, [0036], [0101]). (See also Vohra, using location and type information, in the field table of metadata, to present the fillable form fields, [0026].)

Regarding claim 9, Mehra and Vohra teach the method of claim 1, further comprising: obtaining input data based on the metadata being associated with at least one of the identified one or more fields; and filling the at least one of the identified one or more fields with the input data. More specifically, metadata describes fields of a form (Mehra, [0014]) including input types, positions, or placements (Mehra, [0036], [0101]).
If types of input include checkboxes, radio-button groups, signatures, numeric, date, and text (Mehra, [0001], [0015], [0030], [0053], [0057]), then input into the fields will be according to the metadata.

Regarding claim 10, Mehra and Vohra teach the method of claim 1, further comprising: generating the metadata corresponding to the identified one or more fields; and storing the metadata in association with the structured form data. More specifically, metadata describes fields of a form (Mehra, [0014]) including input types, positions, or placements (Mehra, [0036], [0101]). The object detector stores the identified fields and their input type (Mehra, [0018], [0040]-[0041], [0055]). (See also Vohra: metadata is stored with a document that includes at least a field table that defines at least coordinates for a bounding box of each entered field, a user profile characteristic, and the data type of the data entered for each field (Vohra, [0027], [0039], Figure 1).)

Regarding claims 11-15 and 17-19, these claims recite the electronic device that performs the steps of the method of claims 1-5 and 8-9, respectively; therefore, the same rationale of rejection is applicable. Regarding claim 20, this claim recites a non-transitory computer-readable medium with instructions for performing the steps of the method of claim 1; therefore, the same rationale of rejection is applicable.

Regarding claim 21, Mehra and Vohra teach the method of claim 1, wherein the unstructured form data represents the form and lacks digitally defined and editable fields. More specifically, the original scanned document lacks digitally defined and editable fields (Mehra, [0014]; Vohra, [0022]).

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Mehra and Vohra, and further in view of Yu et al. (US 7,840,891 B1, hereinafter "Yu").

Regarding claim 22, Mehra and Vohra teach the method of claim 1, wherein the one or more text attributes comprise geometric information.
More specifically, textual context ("text attributes") generally refers to text in a form, and spatial context generally refers to positions or locations of text within the form (e.g., as identified via a bounding box or coordinates). Such textual context and spatial context can be identified or derived from the form (Mehra, [0107]). (See also Vohra, location information, [0027], [0028], [0032].)

However, Mehra and Vohra may not explicitly teach every aspect of the geometric information indicating a spatial orientation between the one or more lines of text of the form and the one or more fields of the form, and wherein the geometric information includes one or more of a line starting location, a line height, a line length, or a line spacing.

Yu discloses a method for presenting content in a form, including extracting, from an open format for rendering the form, a plurality of elements on the form, identifying a plurality of contextual relationships between the plurality of elements based on their attributes, generating a representation of the form that includes information about the plurality of contextual relationships, and presenting the content in the form based on the representation (Yu, abstract). Embodiments of the invention identify the text and the editable field that is contextually related to the text and relate them. Using the connected context, presentation tools that assist the user in inputting data into the form may be developed, differences between generations of the form may be accounted for, and the form may be automatically reformatted (construed as restructured) so as to be more user-friendly (Yu, col 2, line 64 - col 3, line 3). Rules in the rule base specify types of elements that are related (Yu, col 4, lines 66-67). Certain rules may be considered both inclusion and exclusion rules depending on how the rule is specified.
For example, an inclusion rule may state that if the difference between the location of two elements (i.e., vertical distance, horizontal distance, or diagonal distance) is within a pre-defined threshold, then the elements may be identified as related. Information indicating elements are related if they are "within a threshold distance on the same line" (e.g., text is determined to be "vertically in line with corresponding fields") implies that at least the text line starting location, a line height, a line length, or a line spacing is determined as part of spatial orientation (Yu, col 5, lines 24-30; lines 50-60; col 12, line 59 - col 13, line 10). Therefore, Yu analyzes a form to identify which text and which fields are related according to spatial orientation, which implies determining a text line starting location, a line height, a line length, or a line spacing.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Mehra and Vohra with Yu, that a method for transforming/restructuring a form using geometric information of elements would include the geometric information indicating a spatial orientation between the one or more lines of text of the form and the one or more fields of the form, and wherein the geometric information includes one or more of a line starting location, a line height, a line length, or a line spacing.
With Mehra and Vohra with Yu disclosing transforming/restructuring a form using geometric information of elements, and with Yu additionally disclosing the determination of which text and field elements are related based on spatial orientation, including distance on the same line or being vertically in line, construable as indications of at least one of a line starting location, a line height, a line length, or a line spacing, one of ordinary skill in the art implementing such a method would include this geometric information in order to present to the user the most accurate instructions for entering data in the fields as possible. One would therefore be motivated to combine these teachings, as doing so would create this method for transforming/restructuring a form using geometric information of elements.

Pertinent Prior Art

The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action.

Sarkar (US 2019/0294661 A1): identifying and displaying fillable fields in a form.
Vohra (US 2015/0317296 A1): identifying and displaying fillable fields in a form.
Shetty (US 2017/0075873 A1): determining fields of a form as metadata and generating and displaying suggestions for fillable fields.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK F RIEGLER, whose telephone number is (571) 270-3625. The examiner can normally be reached M-F 9:30am-6:00pm, ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kieu Vu, can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PATRICK F RIEGLER/
Primary Examiner, Art Unit 2171
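The limitation at the center of both the §112 and §103 disputes (generating field metadata and then displaying a fillable, auto-filled form "based on the metadata") can be sketched in miniature. The sketch below is purely illustrative: every name and structure in it is hypothetical, and it does not reproduce the application's or the cited references' implementations.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class FieldMeta:
    """Per-field metadata of the kind the claims recite: where the field
    sits, what input it accepts, and which user attribute should
    auto-fill it. All names here are hypothetical."""
    name: str
    bbox: Tuple[int, int, int, int]     # x, y, width, height
    input_type: str                     # e.g. "text", "date", "signature"
    autofill_key: Optional[str] = None  # key into the user's profile

def build_fillable_form(fields: List[FieldMeta], profile: dict) -> List[dict]:
    """Turn field metadata into displayable, fillable widgets,
    auto-filling any field whose metadata names a profile key."""
    widgets = []
    for f in fields:
        widgets.append({
            "name": f.name,
            "bbox": f.bbox,  # drives the on-screen field indication
            "type": f.input_type,
            "value": profile.get(f.autofill_key, "") if f.autofill_key else "",
        })
    return widgets

fields = [
    FieldMeta("full_name", (40, 100, 200, 18), "text", "name"),
    FieldMeta("signature", (40, 300, 200, 40), "signature"),
]
form = build_fillable_form(fields, {"name": "Jane Doe"})
print(form[0]["value"])  # Jane Doe (auto-filled from the profile)
```

The examiner's enablement argument maps onto the gap between the two halves of this sketch: the specification describes producing something like FieldMeta, while the renderer (build_fillable_form here) is the part alleged to require undue experimentation.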

Prosecution Timeline

Sep 28, 2023
Application Filed
Jul 03, 2025
Non-Final Rejection — §103, §112
Oct 01, 2025
Interview Requested
Oct 08, 2025
Applicant Interview (Telephonic)
Oct 08, 2025
Response Filed
Oct 08, 2025
Examiner Interview Summary
Dec 02, 2025
Final Rejection — §103, §112
Jan 28, 2026
Interview Requested
Feb 03, 2026
Applicant Interview (Telephonic)
Feb 03, 2026
Examiner Interview Summary
Feb 04, 2026
Response after Non-Final Action
Mar 03, 2026
Request for Continued Examination
Mar 12, 2026
Response after Non-Final Action
Mar 13, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547824
USER INTERFACE DATA ANALYZER HIGHLIGHTER
2y 5m to grant; granted Feb 10, 2026
Patent 12542869
Video Conference Apparatus, Video Conference Method and Computer Program Using a Spatial Virtual Reality Environment
2y 5m to grant; granted Feb 03, 2026
Patent 12535935
SYSTEMS AND METHODS FOR ANNOTATION PANELS
2y 5m to grant; granted Jan 27, 2026
Patent 12505140
AN INFORMATION INTERACTION VIA A MULTIMEDIA CONFERENCE
2y 5m to grant; granted Dec 23, 2025
Patent 12500984
NOTIFICATION SYSTEM NOTIFYING USER OF MESSAGE, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR
2y 5m to grant; granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 55%
With Interview: 89% (+34.6%)
Median Time to Grant: 4y 5m
PTA Risk: High
Based on 346 resolved cases by this examiner. Grant probability derived from career allow rate.
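These projections trace back to the career ratio reported above. A quick sanity check, assuming (as the footnote suggests) that the with-interview figure simply adds the interview lift to the base probability:

```python
# Base grant probability is the career allow rate.
granted, resolved = 189, 346
base = granted / resolved * 100  # in percent
print(round(base))               # 55

# With-interview figure, assuming it is base plus the reported lift
# (an assumption about how the page combines the two numbers).
lift = 34.6                      # reported interview lift, in points
print(round(base + lift, 1))     # 89.2, displayed as 89%
```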
