Prosecution Insights
Last updated: April 19, 2026
Application No. 17/723,129

DOCKETING TRANSLATION TOOL

Final Rejection §103
Filed
Apr 18, 2022
Examiner
MCLEAN, IAN SCOTT
Art Unit
2654
Tech Center
2600 — Communications
Assignee
Black Hills IP Holdings LLC
OA Round
6 (Final)
Grant Probability: 43% (Moderate); 74% with interview
OA Rounds: 7-8
To Grant: 3y 2m

Examiner Intelligence

Career Allow Rate: 43% (grants 19 of 44 resolved cases; -18.8% vs TC avg)
Interview Lift: +31.0% (strong lift among resolved cases with interview)
Typical Timeline: 3y 2m avg prosecution; 40 currently pending
Career History: 84 total applications across all art units

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 60.0% (+20.0% vs TC avg)
§102: 27.2% (-12.8% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
TC avg = Tech Center average estimate • Based on career data from 44 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

2. Applicant's arguments filed 1/20/2026 have been fully considered but they are not persuasive. Applicant argues in particular that Li fails to disclose the amended limitations, including adding a text layer to an image document and determining positional coordinates, and further asserts that Englund and Vukosavljevic were not relied upon for these limitations. Applicant additionally contends that Li is directed to HTML or web-based content rather than image documents and therefore does not teach the claimed auxiliary annotation system. These arguments are not persuasive.

First, Applicant's arguments directed to Li are not commensurate with the scope of the rejection as modified in view of the amended claims. The rejection does not rely on Li to teach the limitations relating to determining positional coordinates, mapping text sections to locations within an image, or adding a text layer including such positional information. Rather, these limitations are taught by Vukosavljevic. Specifically, Vukosavljevic discloses analyzing recognized text regions within an image using bounding box computation (see Vukosavljevic ¶[0027]), which inherently determines positional coordinates corresponding to text regions within the image. Such bounding boxes map text to specific locations in the image document, therefore meeting the claimed set of positional coordinates and mapping the text sections to locations on the image document. Further, Vukosavljevic discloses selecting and processing these identified regions, including overlaying translated text in alignment with the original image content (see Vukosavljevic ¶[0025], ¶[0027]).

This overlay and alignment of recognized text regions constitutes a representation of text associated with positional information, which is reasonably interpreted as a text layer including text in a searchable format and corresponding positional coordinates. Accordingly, the combination of Englund, Li and Vukosavljevic teaches the amended limitations relating to positional coordinates and text layer generation, while Li specifically continues to provide teachings regarding document structure and classification of text regions (e.g., commentary and title sections). Applicant's argument that Li operates on HTML content does not negate its teaching of structural text classification, which is applicable to recognized text regardless of the source format.

Finally, it would have been obvious to one of ordinary skill in the art to utilize positional information (e.g., bounding boxes) associated with recognized text regions in a structured text layer format to facilitate searchability, alignment and downstream processing of translated content. Such techniques are well known in document processing and OCR systems, and allow for smoother utilization of image processing tools. Vukosavljevic explicitly discloses this motivation in ¶[0002]: "enabling the more sophisticated utilization of device capabilities such as the camera which when employed can assist users in making decisions and completing tasks across disparate language barriers." Therefore, the rejection of claims 1-2, 5-7, 9-10, 12-19 is maintained.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

6. Claims 1-2, 6-7, 12-16 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Englund (US 8,144,990), in view of Li (US 2012/0095748) and further in view of Vukosavljevic (US 2014/0172408).
Regarding Claim 1: Englund discloses a method of docketing an image document, the docketing method comprising: receiving a docketing item comprising an image document containing text in a foreign language (Englund: Col 1 lines 23-26, receives an image that includes text); identifying the foreign language in the image document (Englund: Col 2 lines 4-5, identify first and second languages) by scanning the image document (Englund: Col 2 lines 4-5, reviews and analyzes the image document); presenting, on a user interface, a translated version of the image document (Englund: Col 1 line 63 - Col 2 line 3, provides translation via display means of the user system); and reporting the docketing item to the automated or semi-automated docketing system with a copy of the image document and the translated version of the image document in the first language (Englund: Fig. 5, step 540, displays the foreign language, i.e., the original language, and the translated text).

Englund does not explicitly disclose: using an auxiliary annotation system to recognize an image document type of the image document, the image document type associated with a document type text structure, and apply structured text recognizable to an automated or semi-automated docketing system to the added text layer, the structured text indicating at least one commentary section and at least one title section from among the at least two sections of text in the foreign language based on the document type text structure.

However, Li discloses: using an auxiliary annotation system (Li: ¶[0031]-[0033] discloses a section breaking and classification component that analyzes layout and assigns weight/regions (headers, titles, commentary, footers), effectively acting as an auxiliary annotation system to identify document structure); recognizing an image document type of the image document, the image document type associated with a document type text structure (Li: ¶[0031], ¶[0033] and ¶[0039] disclose analyzing a document's layout to recognize its type and the internal text structure, such as headers, titles, footers and main-body regions); and applying structured text recognizable to an automated or semi-automated docketing system to the added text layer, the structured text indicating at least one commentary section and at least one title section from among the at least two sections of text in the foreign language based on the document type text structure (Li: ¶[0031] discloses identifying header and footer; ¶[0033] and ¶[0039] disclose a classification component for classifying document sections, user commentary, header, left side bar and main block, altogether disclosing applying structured metadata).

Englund and Li are combinable because they are from the same field of endeavor, machine translation. It would have been obvious to one of ordinary skill in the art before the effective filing date to integrate Li's structured display of region-specific text in a multi-lingual/foreign-language system with Englund's overlay presentation so that translated commentary and titles appear in their defined positions, thereby enhancing readability and alignment in automated docketing interfaces. Both are from the same field of endeavor of document analysis, e.g., both disclose systems for receiving docketed images or other types of documents containing text and identifying the primary language of the text and/or translating it. The suggestion/motivation for doing so is: "Some existing systems are designed to output a list of languages ranked by confidence in addition to the primary language, but they may not be able to specify which of the languages are actually present in a document. These limitations lower the effectiveness of language detection for multilingual documents, because they may cause incorrect word-breaking. A word-breaker identifies individual words for a given language by determining where word boundaries exist based on the linguistic rules of the language" as disclosed by Li in the background of the invention.
Englund and Li do not explicitly disclose: to identify at least two sections of text in the foreign language within the image document and determine a set of positional coordinates for each of the at least two sections of text in the foreign language, wherein the set of positional coordinates maps the at least two sections of text in the foreign language to a location on the image document, add a text layer to the image document, the text layer including the at least two sections of text in the foreign language in a searchable format and the set of positional coordinates for each of the at least two sections of text in the foreign language, and apply structured text recognizable to an automated or semi-automated docketing system to the added text layer, the structured text indicating at least one commentary section and at least one title section from among the at least two sections of text in the foreign language based on the document type text structure; translating the searchable format of the at least one commentary section to a first language with a machine translator to create at least one translated commentary section; including the at least one translated commentary section embedded in the translated version of the image document based on the set of positional coordinates. 
However, Vukosavljevic discloses: to identify at least two sections of text in the foreign language within the image document and determine a set of positional coordinates for each of the at least two sections of text in the foreign language, wherein the set of positional coordinates maps the at least two sections of text in the foreign language to a location on the image document (Vukosavljevic: ¶[0027] discloses bounding box computation and analysis of recognized text regions producing selectable portions, wherein the recognized text regions correspond to identified sections of text and the bounding boxes define the spatial extents of those regions within the image, therefore providing positional coordinate information that maps each section to a specific location in the image document); add a text layer to the image document, the text layer including the at least two sections of text in the foreign language in a searchable format and the set of positional coordinates for each of the at least two sections of text in the foreign language, and apply structured text recognizable to an automated or semi-automated docketing system to the added text layer, the structured text indicating at least one commentary section and at least one title section from among the at least two sections of text in the foreign language based on the document type text structure (Vukosavljevic: ¶[0025] discloses an overlay algorithm which overlays the text of the second language on the text of the first language as viewed in a display, and ¶[0027] again discloses bounding box computation and producing selectable portions; the bounding boxes are positional coordinate data, the selectable regions are structured text regions, and the overlaying of translated text aligned with original positions is text associated with coordinates); translating the searchable format of the at least one commentary section to a first language with a machine translator to create at least one translated commentary section (Vukosavljevic: ¶[0027] teaches selective region-based translation, i.e., searchable; the created section could be any kind of section: header, footer, commentary, etc.); and including the at least one translated commentary section embedded in the translated version of the image document based on the set of positional coordinates (Vukosavljevic: ¶[0027] identifies text regions, associates them with bounding boxes and performs translation on those regions, wherein the translation is performed on selected text regions and the resulting translated text is positioned in correspondence with the bounding boxes of the original regions, therefore embedding the translated content at locations defined by the positional coordinates).

Englund, Li and Vukosavljevic are combinable because they are all from the same field of endeavor, machine translation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize positional information (e.g., bounding boxes as taught in Vukosavljevic) in conjunction with recognized text to facilitate structured processing of document content. Specifically, once text regions are identified within an image and associated with positional coordinates, it would have been predictable to maintain such text and coordinate information in a structured representation (i.e., a text layer) to enable searchability, alignment and downstream processing such as translation and classification. Further, applying structured classification to such recognized text regions would have been obvious, as Li's classification operates on text segments regardless of the source format of the text.

Combining Li's classification with Vukosavljevic's coordinate-based text region identification yields an obvious result, as the translated bounded regions of Vukosavljevic are already commentary, headings, titles, etc.; the reference merely does not explicitly note the identification of such regions. The suggestion/motivation for combining these references is "In one robust implementation, the overlay of the translated text can be made word-for-word on the original text. Put another way, if the original text is three words, and the translation is three words, and each translated word corresponds to a single original text word, the visual presentation is that the first word of the translated text overlays the corresponding first word of the original text, the second word of the translated text overlays the corresponding second word of the original text, and so on. Thus, there can be made a direct correlation between a word or words of the translated text to the equivalent word or words of original text (e.g., words, phrases, etc.) of the different languages" as disclosed in ¶[0005] of Vukosavljevic.

Regarding Claim 2: The proposed combination of Englund, Li and Vukosavljevic further discloses the method of claim 1, wherein identifying the foreign language comprises using a data capture and natural language processing system (Englund: Col 5 line 47 - Col 6 line 53, the image is captured; additionally the machine uses Optical Character Recognition (OCR) to process the image in order to produce text which may then be translated using a translation memory system).
Regarding Claim 6: The proposed combination of Englund, Li and Vukosavljevic further discloses the method of claim 1, wherein the structured text indicating at least one commentary section and at least one title section from among the at least two sections of text in the foreign language based on the document type text includes highlighting one or more sections in the added text layer on the document type text structure (Vukosavljevic: ¶[0029] explicitly discloses highlighting and italicization of specific translated sections; Englund: Col 7 line 47 - Col 9 line 2, text may be highlighted within the OCR'd image for text and translation purposes).

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine Englund's highlighting of OCR'd image translation with Vukosavljevic's marked-section translation method, which also includes highlighting and marking specifically for predetermined marked-off sections. Each reference (Englund, Li and Vukosavljevic) is within the same field of endeavor of document analysis, e.g., each discloses systems for receiving docketed images or other types of documents containing text and identifying the primary language of the text and/or translating it. This system would be simple to integrate, would not require any undue experimentation, and would produce predictable results. The motivation is simple: providing the user with a clear visualization of translated portions for ease of use and experience.
Regarding Claim 7: The proposed combination of Englund, Li and Vukosavljevic further discloses the method of claim 1, wherein translating the searchable format of the at least one commentary section further comprises feeding the searchable format of the at least one commentary section into a machine translation program and producing an English version of the searchable format of the at least one commentary section of the text in the foreign language (Englund: Col 4 lines 25-42, translation module 290; Fig. 6B and p(49) may produce English versions of the selection). It is noted that although Englund does not mention 'commentary' text, the combination of Englund, Li and Vukosavljevic discloses that commentary text could be a form of input, and because this is the case, this commentary could certainly go through the translation modules of Englund.

Regarding Claim 12: The proposed combination of Englund, Li and Vukosavljevic further discloses the method of claim 1, wherein translating the searchable format of the at least one commentary section comprises translating the at least two sections of text in the foreign language within the image document (Li: ¶[0032]-[0034] discloses recognizing and sectioning the entire document). It would have been obvious to one of ordinary skill in the art before the effective filing date to integrate Li's structured display of region-specific text in a multi-lingual/foreign-language system with Englund's overlay presentation so that translated commentary and titles appear in their defined positions, thereby enhancing readability and alignment in automated docketing interfaces. Both are from the same field of endeavor of document analysis, e.g., both disclose systems for receiving docketed images or other types of documents containing text and identifying the primary language of the text and/or translating it. Li explicitly discloses performing these techniques on an entire document, which Englund's system could easily be made to do.
Regarding Claim 13: The proposed combination of Englund, Li and Vukosavljevic further discloses the method of claim 1, further comprising docketing the translated version of the image document in the first language (Englund: Col 9 lines 11-42, after the translation is performed it may be stored, and it may also be output).

Regarding Claim 14: Claim 14 has been analyzed with regard to claim 1 (see rejection above) and is rejected for the same reasons of obviousness above.

Regarding Claim 15: The proposed combination of Englund, Li and Vukosavljevic further discloses the system of claim 14, further comprising an electronic communication system for receiving the one or more electronic communications and directing the associated file to the intake tool (Englund: Fig. 4, 440 discloses a communication interface, and user interface 430 is capable of supplying user input to the translation memory 460, explained in Col 1 lines 27-32).

Regarding Claim 16: The proposed combination of Englund, Li and Vukosavljevic further discloses the system of claim 14, further comprising a file database, wherein the intake tool is configured to scrape the one or more electronic communications from the file database (Englund: Fig. 2, processor 2020 and translation module 290 are capable of accessing storage device 250, which may contain stored files).

Regarding Claim 18: The proposed combination of Englund, Li and Vukosavljevic further discloses the system of claim 14, wherein the docketing system is automated (Englund: Figs. 1-4, interpreted as a docketing system, are all automated computer systems).

Regarding Claim 19: The proposed combination of Englund, Li and Vukosavljevic, explained in the rejection of the method of claim 1, renders obvious the steps of the computer readable medium of claim 19 because these steps occur in the operation of the prior art as discussed above. Thus, arguments similar to those presented above for claim 1 are equally applicable to claim 19.
Note that Englund discloses a computer readable medium at least at Fig. 2.

7. Claims 5 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Englund in view of Li, further in view of Vukosavljevic and further in view of Goswami (US 2010/0125447).

Regarding Claim 5: The proposed combination of Englund, Li and Vukosavljevic does not explicitly disclose the method of claim 1, wherein identifying the foreign language comprises identifying multiple languages in the image document and selecting which language to translate. However, Goswami discloses that identifying the foreign language comprises identifying multiple languages in the image document and selecting which language to translate (Goswami: ¶[0022], each language M in the candidate languages is examined, and the system determines which portions of a document are in which specific language(s) and chooses each one's separate translation language). Englund, Li and Vukosavljevic in view of Goswami are combinable because they are from the same field of endeavor of document and language processing; e.g., all disclose methods of scanning text and determining a language or meaning, or attempting to provide clarity. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to identify the foreign language by identifying multiple languages in the document and selecting which language to translate, because a document may commonly contain more than one language; the process of translating or identifying would not change in Englund beyond the idea of multiple languages being identified as taught in Goswami.

Regarding Claim 9: The proposed combination of Englund, Li and Vukosavljevic further discloses the method of claim 1, except for further comprising sending the translated version of the image document to an attorney or other professional for review.
However, Goswami discloses sending the translated version of the image document to an attorney or other professional for review (Goswami: ¶[0105] involves review by a person and automated analysis). Englund, Li and Vukosavljevic in view of Goswami are combinable because they are from the same field of endeavor of document and language processing; e.g., all disclose methods of scanning text and determining a language or meaning, or attempting to provide clarity. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide a translated version by sending the translated version of the document to an attorney or other professional for review. The suggestion/motivation for doing so is "discovery in civil litigation usually involves the production of massive quantities of electronic documents that the receiving party must sift through" as disclosed in ¶[0003] of Goswami.

8. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Englund in view of Li, further in view of Vukosavljevic and further in view of Gelosi (US 11,475,209).

Regarding Claim 10: The proposed combination of Englund, Li and Vukosavljevic further discloses the method of claim 1, except wherein the image document comprises an office action, search report, letter, notice of allowance, brief, or another legal document. However, Gelosi discloses wherein the image document comprises an office action, search report, letter, notice of allowance, brief, or another legal document (Gelosi: Col 9 line 62 - Col 11 line 5, the documents that are input may be legal documents). Englund, Li and Vukosavljevic in view of Gelosi are combinable because they are from the same field of endeavor of document and language processing; e.g., all disclose methods of scanning text and determining useful output.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide a document comprising an office action, search report, letter, notice of allowance, brief, or another legal document. The suggestion/motivation for doing so is that a legal document, and any other type of document falling under that umbrella, is not only commonly complex and in need of software assistance, but is also capable of being translated by Englund. Therefore, it would have been obvious to combine these references with Gelosi to obtain the invention as specified in claim 10.

9. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Englund in view of Li, further in view of Vukosavljevic and further in view of Chan (WO 2004/044741).

Regarding Claim 17: The proposed combination of Englund, Li and Vukosavljevic further discloses the system of claim 14, except further comprising file records, wherein the automated or semi-automated docketing system is configured to communicate with and update the file records. However, Chan discloses file records, wherein the automated or semi-automated docketing system is configured to communicate with and update the file records (Chan: page 8 lines 15-20 discloses that the machine translation may be updated within a database). Englund, Li and Vukosavljevic in view of Chan are combinable because they are from the same field of endeavor of document and language processing; e.g., all disclose methods of scanning text and determining useful output. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide file records, wherein the docketing system is configured to communicate with and update the file records. The suggestion/motivation for doing so is "it is quite expensive to hire professionals to translate the web pages and their updates into different languages. For a large web site with hundreds even thousands of pages of documents, the project of translation is huge. Second, because the translation takes time, the multilingual versions cannot be updated in a timely manner." Therefore, it would have been obvious to combine Englund, Li and Vukosavljevic with Chan to obtain the invention as specified in claim 17.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IAN SCOTT MCLEAN whose telephone number is (703) 756-4599. The examiner can normally be reached Monday - Friday 8:00-5:00 EST, off every 2nd Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hai Phan, can be reached at (571) 272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IAN SCOTT MCLEAN/
Examiner, Art Unit 2654

/HAI PHAN/
Supervisory Patent Examiner, Art Unit 2654

Prosecution Timeline

Apr 18, 2022: Application Filed
Mar 20, 2024: Non-Final Rejection — §103
Jul 01, 2024: Response Filed
Sep 17, 2024: Final Rejection — §103
Nov 19, 2024: Response after Non-Final Action
Dec 03, 2024: Response after Non-Final Action
Dec 09, 2024: Request for Continued Examination
Dec 12, 2024: Response after Non-Final Action
Feb 13, 2025: Non-Final Rejection — §103
May 27, 2025: Response Filed
Jun 16, 2025: Final Rejection — §103
Sep 22, 2025: Request for Continued Examination
Sep 23, 2025: Response after Non-Final Action
Oct 14, 2025: Non-Final Rejection — §103
Jan 20, 2026: Response Filed
Mar 31, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602553
SPEECH TRANSLATION METHOD, DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Apr 14, 2026
Patent 12494199
VOICE INTERACTION METHOD AND ELECTRONIC DEVICE
2y 5m to grant • Granted Dec 09, 2025
Patent 12443805
Systems and Methods for Multilingual Data Processing and Arrangement on a Multilingual User Interface
2y 5m to grant • Granted Oct 14, 2025
Patent 12437144
Content Recommendation Method and User Terminal
2y 5m to grant • Granted Oct 07, 2025
Patent 12400644
DYNAMIC LANGUAGE MODEL UPDATES WITH BOOSTING
2y 5m to grant • Granted Aug 26, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 43%; 74% with interview (+31.0%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 44 resolved cases by this examiner. Grant probability derived from career allow rate.
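The projection figures above are internally consistent: the 43% grant probability matches the examiner's career allow rate (19 granted of 44 resolved), and the 74% figure adds the +31.0 point interview lift. A minimal sketch of the apparent arithmetic (an assumption about how the tool derives these numbers, not its actual code):

```python
# Headline figures reconstructed from the stats shown above.
# The additive interview-lift formula is an assumption for illustration.
granted = 19
resolved = 44
career_allow_rate = granted / resolved               # ~0.432, shown as 43%
interview_lift = 0.31                                # +31.0 percentage points
with_interview = career_allow_rate + interview_lift  # ~0.742, shown as 74%

print(f"{career_allow_rate:.0%} base, {with_interview:.0%} with interview")
# prints: 43% base, 74% with interview
```

Note the small sample: with only 44 resolved cases, the base rate carries a wide margin of uncertainty.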
