Prosecution Insights
Last updated: April 19, 2026
Application No. 17/849,271

FONT RECOMMENDATION, TOPIC EXTRACTION, AND TRAINING DATA GENERATION

Final Rejection (§103, §112)
Filed: Jun 24, 2022
Examiner: TSUI, WILSON W
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Adobe Inc.
OA Round: 4 (Final)
Grant Probability: 62% (Moderate)
OA Rounds: 5-6
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (grants 365 of 593 resolved cases; +6.6% vs TC avg)
Interview Lift: strong, +58.1% on resolved cases with interview
Typical Timeline: 4y 0m avg prosecution; 44 applications currently pending
Career History: 637 total applications across all art units

Statute-Specific Performance

§101: 15.5% (-24.5% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 593 resolved cases.

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

The following rejections are withdrawn in view of new grounds of rejection necessitated by applicant's amendments:

- Claims 1, 3, 4, 6, 7, 13, 22 and 23, rejected under 35 U.S.C. 103 as being unpatentable over Jiang et al ("Visual Font Pairing", IEEE, August 2020, pages 2086-2097) in view of Liu et al (US 2019/0138860, published May 9, 2019, filed Nov. 8, 2017), in view of Schowtka et al (US 8,595,627, issued Nov. 26, 2013, filed Jan. 7, 2008).
- Claim 2, rejected under 35 U.S.C. 103 over Jiang et al, Liu et al and Schowtka et al, further in view of Krishnamurthy et al (US 2018/0300609, published Oct. 18, 2018, filed Apr. 13, 2017).
- Claim 5, rejected under 35 U.S.C. 103 over Jiang et al, Liu et al and Schowtka et al, further in view of Rezgui (US 2017/0262416, published Sep. 14, 2017, filed Mar. 3, 2017).
- Claims 8, 9, and 14-16, rejected under 35 U.S.C. 103 over Jiang et al, Liu et al and Schowtka et al, further in view of Kadia et al (US 2021/0103632, published Apr. 8, 2021, filed Oct. 8, 2019).
- Claims 10-12, rejected under 35 U.S.C. 103 over Jiang et al, Liu et al and Schowtka et al, further in view of Kaasila et al (EP 2 857 983 A2, published Aug. 4, 2015).
- Claim 21, rejected under 35 U.S.C. 103 over Jiang et al, Liu et al and Schowtka et al, further in view of Naik et al (US 2011/0289407, published Nov. 24, 2011, filed May 18, 2010).
- Claim 24, rejected under 35 U.S.C. 103 over Jiang et al, Liu et al and Schowtka et al, further in view of Wang et al (US 2017/0098140, published Apr. 6, 2017, filed Oct. 6, 2015).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-16 and 21-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

With regards to claims 1, 15 and 22, they first recite "producing a font recommendation … the font recommendation including at least one font from the plurality of fonts that is dissimilar …" and subsequently recite "… to replace the glyph of the input font with a corresponding glyph of a selected font from the plurality of fonts …". This selected-font limitation is indefinite because 'the plurality of fonts' is also mentioned in the 'determining an amount …' step; with multiple references to the plurality of fonts, it is unclear whether the 'outputting' step is actually referencing the plurality of fonts that includes the at least one font determined to be dissimilar. The examiner will assume for purposes of examination that the outputting step has no dependence upon the at least one font that was determined to be dissimilar in the 'producing' step. Should the applicant require otherwise, the examiner suggests the applicant consider creating/setting up a set of fonts that includes the at least one font that was determined to be dissimilar.

With regards to claims 2-14, 16 and 23-24, they do not resolve the deficiencies of their corresponding independent claim(s), and thus they are rejected under similar rationale as their respective independent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 4, 6, 7, 13, 15, 16, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang et al ("Visual Font Pairing", IEEE, August 2020, pages 2086-2097) in view of Liu et al (US 2019/0138860, published May 9, 2019, filed Nov. 8, 2017), in view of Schowtka et al (US 8,595,627, issued Nov. 26, 2013, filed Jan. 7, 2008), and in view of Pao et al (US 2017/0262414, published Sep. 14, 2017, filed Mar. 10, 2016).

With regards to claim 1, Jiang et al teaches a method comprising: receiving, by a processing device, an item of digital content including an input font (page 2089: an item/document that contains a header and a body section is received for processing to train a scoring function; subsequently, after training, a document having a header is encoded as 'xq' for input into the trained scoring function); [obtaining] … by the processing device, a font representation of the font including [an] encoding [of] the font … (page 2089, right column: a query header font q is encoded as a feature vector 'xq'); determining, by the processing device, an amount of similarity of the font [representation] of the input font to a plurality of said font [representations] of a plurality of fonts (page 2089, right column, and page 2090, left column: a pairing similarity score is generated to rank a plurality of body font representations based upon 'xq'. As further explained on page 2090: "Suppose we are querying which body fonts Pq = {yq1, yq2, ...} will go with a header font xq. We first find the top K1 nearest header fonts [x1, ..., xi, ..., xK1], based on the cosine similarity cos(xq, xi) between xq and all the training headers {x1, ..., xi, ..., xm}. Each header xi has a list of body fonts that pair with it, i.e., Pi = {yi1, yi2, ...}. Note that fonts may repeat in this list, so that the popularity of pairings can be captured in the data. The fonts in P = {P'1, ..., P'i, ..., P'K1} are regarded as candidate body fonts for pairing xq"); producing, by the processing device, a font recommendation based on the determining, the font recommendation including at least one font from the plurality of fonts that are dissimilar to each other and visually different from the input font (pages 2089, right column, and 2090, right column: the font recommendation includes different identified fonts (differently identified/'not similar in identification'), and these recommended fonts are based upon learning, in a supervised manner, the visual relationship of font pairs in a training set); and outputting, by the processing device, the recommendation in the user interface by previewing a respective appearance of each of the fonts … (page 2090, right column, and Fig. 6: body fonts could be output as candidates/'recommendations'). However, although Jiang implicitly teaches the font being represented in an alternative format (as an encoded vector, since, as explained in Fig. 1 and pages 2086 and 2089, a font having visual characteristics is represented in an alternate form as feature vector 'xq'), Jiang et al does not explicitly teach generating … a font representation of the font including encoding of the input font using machine learning.
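The candidate-retrieval step quoted from Jiang above (take the top K1 nearest training headers by cosine similarity, then pool their paired body fonts) can be sketched as follows. The function and variable names and the toy data are illustrative assumptions, not code from the reference.

```python
import numpy as np

def candidate_body_fonts(xq, header_vecs, body_pairings, k1=2):
    """Return candidate body fonts pooled from the k1 nearest headers."""
    # Cosine similarity between the query encoding xq and every
    # training header vector.
    sims = header_vecs @ xq / (
        np.linalg.norm(header_vecs, axis=1) * np.linalg.norm(xq))
    nearest = np.argsort(-sims)[:k1]  # indices of the top-k1 headers
    candidates = []
    for i in nearest:
        # Fonts may repeat across lists, capturing pairing popularity.
        candidates.extend(body_pairings[i])
    return candidates

# Toy data: three training headers, each with its paired body fonts.
headers = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
pairings = [["Lato"], ["Lato", "Roboto"], ["Merriweather"]]
print(candidate_body_fonts(np.array([1.0, 0.05]), headers, pairings, k1=2))
# ['Lato', 'Lato', 'Roboto']
```

The repeat of "Lato" in the pooled list is deliberate: as the quoted passage notes, repeated fonts let pairing popularity survive into the candidate ranking.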
Additionally, Jiang et al does not expressly teach … including a glyph of an input font in the digital content that is to be edited from a user interface; determining an amount of similarity of the font encoding of the input font, respectively, with respect to a plurality of font encodings of fonts, the determining based on measured distances in an embedding space between the font encoding of the input font and the plurality of font encodings of the plurality of fonts, respectively; … the input font and the at least one font having a respective measured distance of at least a threshold distance apart in the embedding space; … the font recommendation in the user interface by previewing a respective appearance of each of the plurality of fonts such that the digital content is modified to replace the glyph of the input font with a corresponding glyph of a selected font from the plurality of fonts when the respective appearance of the font is selected from the user interface. Yet Liu et al teaches generating … a font representation of the font including encoding the font using an encoder machine learning model (Fig. 11, paragraphs 0041, 0081-0084, 0093, 0094 and 0101: using a computer, which has a processor and memory, font processing is implemented; the processing includes extraction of features that are visually included in an input of digital content, and those features (which can include at least glyph curvature/shape (interpreted as geometry), glyph size, glyph height, glyph location and glyph spacing) are then encoded as generated feature vectors. The feature vector(s) are then referenced by another, different model network (this network being a trainable/tunable classification neural node network)).
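The feature-vector encoding Liu is cited for (visual glyph attributes collected into a numeric vector that a downstream classifier network consumes) reduces, at its simplest, to a sketch like the following. The attribute names are illustrative assumptions, not Liu's actual feature set.

```python
import numpy as np

def encode_glyph_features(glyph):
    """Encode a dict of visual glyph attributes as a feature vector."""
    return np.array([
        glyph["curvature"],  # shape/geometry of the glyph outline
        glyph["size"],
        glyph["height"],
        glyph["location"],   # e.g. offset relative to the baseline
        glyph["spacing"],
    ], dtype=np.float32)

vec = encode_glyph_features(
    {"curvature": 0.4, "size": 12.0, "height": 8.5,
     "location": 0.0, "spacing": 1.2})
print(vec.shape)  # (5,)
```

A real system would extract these values from rendered glyph images rather than take them as inputs; the point here is only the vector form handed to the classifier network.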
It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to have modified Jiang et al's ability to use a recommendation model to produce font recommendations/candidates by including/obtaining an alternate representation of a font (as an encoded font), such that the recommendations can be implemented using a computer system (having a processor and memory) to produce candidate fonts by further obtaining/generating the encoded font through an encoding process using an encoder for use by the recommendation/candidate model, as taught by Liu et al. The combination would have allowed Jiang et al to have efficiently performed font recognition in an accurate and stable manner (Liu et al, paragraph 0006).

However, the combination of Jiang et al and Liu et al does not expressly teach … including a glyph of a font in the digital content that is to be edited from a user interface; … the font recommendation in the user interface by previewing a respective appearance of each of the plurality of fonts such that the digital content is modified to replace the glyph of the input font with a corresponding glyph of a selected font from the plurality of fonts when the respective appearance of the font is selected from the user interface. Yet Schowtka et al teaches … including a glyph of a font in the digital content that is to be edited from a user interface (Fig. 10, column 9, lines 16-32: digital content is shown on the right having individual characters rendered as font elements (glyphs), and those displayed characters can be visually edited/replaced with alternative font elements); … the font recommendation in the user interface by previewing a respective appearance of each of the plurality of fonts such that the digital content is modified to replace the glyph of the input font with a corresponding glyph of a selected font from the plurality of fonts when the respective appearance of the font is selected from the user interface (Fig. 10, column 9, lines 16-32: a list of candidate font schemes is shown for a user to select an individual scheme, and in response to a selection, the font elements/glyphs previously displayed are updated/edited to display the new font elements/glyphs of the selected font scheme).

It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to have modified Jiang et al and Liu et al's ability to preview font recommendation candidates (having glyphs) based upon similarity to other font representation(s), such that the preview would have allowed digital content to be modified in the preview to replace a prior font's elements upon selection of one of the listed candidates, as taught by Schowtka et al. The combination would have allowed implementing a more efficient and user-friendly system … to allow a user to consider and select among customization choices for the user's document (Schowtka et al, column 2, lines 55-57).
However, the combination of Jiang et al, Liu et al and Schowtka et al does not expressly teach determining an amount of similarity of the font encoding of the input font, respectively, with respect to a plurality of font encodings of fonts, the determining based on measured distances in an embedding space between the font encoding of the input font and the plurality of font encodings of the plurality of fonts, respectively; … the input font and the at least one font having a respective measured distance of at least a threshold distance apart in the embedding space … Yet Pao et al teaches determining an amount of similarity of the font encoding of the input font, respectively, with respect to a plurality of font encodings of fonts, the determining based on measured distances in an embedding space between the font encoding of the input font and the plurality of font encodings of the plurality of fonts, respectively; … the input font and the at least one font having a respective measured distance of at least a threshold distance apart in the embedding space … (paragraphs 0038, 0032 and 0033: fonts are encoded as feature vectors, a distance is calculated as a font similarity/dissimilarity score, and a requirement can be set for a maximum amount of similarity (which can encompass a low value for maximum similarity) to yield a level of dissimilarity). It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to have modified Jiang et al, Liu et al, and Schowtka et al's ability to produce fonts that are different from each other based upon assessing input font data, such that a metric could be set for how similar/dissimilar the input and other fonts may be, as taught by Pao et al. The combination would have allowed Jiang et al, Liu et al and Schowtka et al to have identified and located fonts based upon a given input font in an efficient manner.
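The embedding-distance test Pao is cited for (fonts whose encodings sit at least a threshold distance from the input font's encoding are treated as dissimilar candidates) can be sketched as follows. The encodings and threshold are toy values, and `dissimilar_fonts` is an illustrative name, not from the reference.

```python
import numpy as np

def dissimilar_fonts(input_vec, font_vecs, names, threshold):
    """Return names of fonts at least `threshold` from the input font
    in the embedding space (Euclidean distance)."""
    dists = np.linalg.norm(font_vecs - input_vec, axis=1)
    return [n for n, d in zip(names, dists) if d >= threshold]

# Toy embeddings: two fonts near the input font, one far from it.
fonts = np.array([[0.1, 0.1], [0.9, 0.8], [0.2, 0.0]])
print(dissimilar_fonts(np.array([0.0, 0.0]), fonts,
                       ["Near", "Far", "Close"], threshold=0.5))
# ['Far']
```

Setting a low cap on allowed similarity, as the cited paragraphs describe, is equivalent to requiring a minimum distance like the `threshold` here.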
With regards to claim 3, which depends on claim 1, the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al teaches wherein the generating includes detecting at least one font feature describing how glyphs of the input font are included as part of the digital content (as similarly explained in the rejection of claim 1, paragraph 0041 of Liu et al was explained to teach font features describing how glyphs are included in the digital content, including at least glyph curvature/shape (interpreted as geometry), glyph height and glyph spacing), and is rejected under similar rationale.

With regards to claim 4, which depends on claim 3, the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al teaches wherein the at least one font feature includes text features, geometry features, or style features of the input font within the digital content (as similarly explained in the rejection of claim 1, paragraph 0041 of Liu et al was explained to teach font features describing how glyphs are included in the digital content, including at least glyph curvature/shape (interpreted as geometry), glyph contrast, glyph size, glyph height, glyph location and glyph spacing), and is rejected under similar rationale.

With regards to claim 6, which depends on claim 4, the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al teaches wherein the geometry features include at least one of text height, text width, area, font size, rotation, or location within the digital content (as similarly explained in the rejection of claim 1, paragraph 0041 of Liu et al was explained to teach font features describing how glyphs are included in the digital content, including at least glyph curvature/shape (interpreted as geometry), glyph height, glyph location in relation to a baseline, and glyph spacing), and is rejected under similar rationale.
With regards to claim 7, which depends on claim 4, the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al teaches wherein the style features include at least one of color, opacity, backing shape, layout, spacing, padding, tracking or stroke weight (as similarly explained in the rejection of claim 1, paragraph 0041 of Liu et al was explained to teach font features describing how glyphs are included in the digital content, including at least glyph curvature/shape (interpreted as geometry), glyph height, glyph location in relation to a baseline, and glyph spacing), and is rejected under similar rationale.

With regards to claim 13. (Original) The method as described in claim 1: Jiang et al teaches the producing of the recommendation, as similarly explained in the rejection of claim 1, and it is rejected under similar rationale. Additionally, Jiang et al further explains wherein the producing the recommendation is based at least in part on a respective font popularity of the at least one font from the plurality of fonts (page 2090, right column: body font recommendation is based upon frequency/popularity and how widely the fonts are used).

With regards to claim 15, the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al teaches the limitations of claim 15 as similarly explained in the rejection of claim 1, and it is rejected under similar rationale.

With regards to claim 16. (Original) The one or more computer-readable non-transitory storage media as described in claim 15: the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al teaches the generating, as similarly explained in the rejection of claim 15, and it is rejected under similar rationale. Additionally, the combination teaches wherein the generating includes detecting at least one font feature describing how glyphs of the input font are included as part of the item of digital content, as explained in the rejection of claim 15 (Fig. 11, paragraphs 0041, 0081-0084, 0093, 0094, 0101 and 0102 of Liu et al: using a computer, which has a processor and memory, font processing is implemented; the processing includes extraction of features that are visually included in an input of digital content, and those features are then encoded with an encoder as generated feature vectors. The feature vector(s) are then referenced by another, different model network (this network being a trainable/tunable classification neural node network). The values in a feature vector represent how glyphs are included from the input (such as glyph curvature, spacing, size, etc.)), and is rejected under similar rationale.
With regards to claim 22, the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al teaches a computing device comprising: a processing device; and a computer-readable storage media storing instructions that, responsive to execution by the processing device, causes the processing device to perform operations including: receiving an item of digital content including a glyph of an input font in the digital content that is to be edited from a user interface; generating a font encoding of the input font using machine learning; determining an amount of similarity of the font encoding of the input font, respectively, with respect to a plurality of font encodings of a plurality of fonts, the determining based on measured distances in an embedding space between the font encoding of the input font and the plurality of font encodings of the plurality of fonts, respectively; producing a font recommendation based on the determining, the font recommendation including at least one font from the plurality of fonts that is dissimilar to and visually different from the input font, the input font and the at least one font having a respective measured distance of at least a threshold distance apart in the embedding space; and outputting the font recommendation in the user interface by previewing a respective appearance of each of the plurality of fonts such that the digital content is modified to replace the glyph of the input font with a corresponding glyph of a selected font from the plurality of fonts when the respective appearance of the selected font is selected from the user interface, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Jiang et al ("Visual Font Pairing", IEEE, August 2020, pages 2086-2097) in view of Liu et al (US 2019/0138860, published May 9, 2019, filed Nov. 8, 2017), in view of Schowtka et al (US 8,595,627, issued Nov. 26, 2013, filed Jan. 7, 2008), in view of Pao et al (US 2017/0262414, published Sep. 14, 2017, filed Mar. 10, 2016), and in view of Krishnamurthy et al (US 2018/0300609, published Oct. 18, 2018, filed Apr. 13, 2017).

With regards to claim 2. (Original) The method as described in claim 1: Jiang et al and Liu et al teach wherein the generating includes forming an initial encoding of the input font using the encoder machine learning model and generating the font encoding of the input font … using an auto-encoder model (as similarly explained in the rejection of claim 1, Jiang was explained to teach an encoding, xq, of the font, which is interpreted as an 'initial encoding', and Jiang et al's encoded data was modified with Liu et al's teachings such that the encoding was automatically generated via an encoder to produce the encoded data), and it is rejected under similar rationale. However, the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al does not expressly teach generating … the … encoding … by compressing the initial encoding using an auto-encoder model. Yet Krishnamurthy et al teaches generating … encoding … by compressing the initial encoding using an auto-encoder model (paragraph 0057: a dimension reduction module is used to automatically perform encoding to create a compressed representation of the input data). It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to have modified the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al's ability to encode font data into a feature vector such that the vector dimensionality could have been reduced, as taught by Krishnamurthy et al. The combination would have allowed Jiang et al, Liu et al and Schowtka et al to have made learning … representation vectors feasible when data sets are large (Krishnamurthy et al, paragraph 0055).
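The compression step Krishnamurthy is cited for (an auto-encoder bottleneck mapping a high-dimensional initial encoding down to a compact representation) can be sketched as an untrained forward pass. The dimensions and random weights below are placeholders; a real model would learn the weights by minimizing reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_CODE = 64, 8  # illustrative dimensions, not from the reference

# Placeholder (untrained) encoder and decoder weight matrices.
W_enc = rng.standard_normal((D_CODE, D_IN)) * 0.1
W_dec = rng.standard_normal((D_IN, D_CODE)) * 0.1

def compress(x):
    """Encoder half: project the initial encoding to the bottleneck."""
    return np.tanh(W_enc @ x)

def reconstruct(code):
    """Decoder half: map the compressed code back to input space."""
    return W_dec @ code

initial = rng.standard_normal(D_IN)  # stands in for the 'initial encoding'
code = compress(initial)
print(initial.shape, "->", code.shape)  # (64,) -> (8,)
```

The dimensionality reduction (64 values down to 8 here) is what makes learning on large font sets feasible, which is the benefit the rejection attributes to Krishnamurthy.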
With regards to claim 23, which depends on claim 2, the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al teaches wherein at least one of the encoder machine learning model or the auto-encoder model comprises a first neural network trained for font encoding; and the machine learning includes training a recommendation machine learning model that comprises a second neural network that is different than the first neural network and trained for font recommending, as similarly addressed and explained in the rejection of claim 1 (Fig. 11, paragraphs 0081-0084, 0093, 0094, 0101 and 0102 of Liu et al: using a computer, which has a processor and memory, font processing is implemented; the processing includes extraction of features that are visually included in an input of digital content, and those features are then encoded with an encoder as generated feature vectors. The feature vector(s) are then referenced by another, different model network (this network being a trainable/tunable classification neural node network)), and is rejected under similar rationale.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Jiang et al ("Visual Font Pairing", IEEE, August 2020, pages 2086-2097) in view of Liu et al (US 2019/0138860, published May 9, 2019, filed Nov. 8, 2017), in view of Schowtka et al (US 8,595,627, issued Nov. 26, 2013, filed Jan. 7, 2008), in view of Pao et al (US 2017/0262414, published Sep. 14, 2017, filed Mar. 10, 2016), and in view of Rezgui (US 2017/0262416, published Sep. 14, 2017, filed Mar. 3, 2017).
With regards to claim 5, which depends on claim 4, the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al addresses the text features (as similarly explained in the rejection of claim 4, paragraph 0041 of Liu et al was explained to teach that the glyphs used include contrast features), and it is rejected under similar rationale. However, the combination does not expressly teach that the text features include at least one of a character count, word count, sentence count, line count or word length. Yet Rezgui teaches that the text features include a character count, word count, sentence count, line count or word length (paragraph 0060: a plurality of features are identified/learned from a document (the claimed 'item') that relate to text and fonts associated with the text, and those features include at least the number of words and/or font size/geometry). It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to have modified the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al's ability to identify text feature and font feature data used in a document section such that the identified features would have further included text features and geometric features, as taught by Rezgui. The combination would have allowed Jiang et al, Liu et al and Schowtka et al to have interpreted original document data to automatically produce target data to help provide greater context and interpretation of the original contents.

Claims 8, 9, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang et al ("Visual Font Pairing", IEEE, August 2020, pages 2086-2097) in view of Liu et al (US 2019/0138860, published May 9, 2019, filed Nov. 8, 2017), in view of Schowtka et al (US 8,595,627, issued Nov. 26, 2013, filed Jan. 7, 2008), in view of Pao et al (US 2017/0262414, published Sep. 14, 2017, filed Mar. 10, 2016), and in view of Kadia et al (US 2021/0103632, published Apr. 8, 2021, filed Oct. 8, 2019).

With regards to claim 8. (Original) The method as described in claim 1: the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al teaches wherein the generating … includes determining … of the digital content that includes the input font (as similarly explained in the rejection of claim 1, Jiang et al teaches determining font feature data), and it is rejected under similar rationale. However, the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al does not expressly teach … includes determining a document feature of the digital content that includes the input font. Yet Kadia et al teaches … includes determining a document feature of the digital content that includes the input font (abstract: document feature(s) identified for the content that includes the font include one or more digital images). It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to have modified the combination of Jiang et al, Liu et al, Schowtka et al and Pao et al's ability to generate a font representation by identifying and encoding feature data such that the feature data would have further included identifying image(s) as document feature(s) of the digital content, as taught by Kadia et al. The combination would have allowed Jiang et al to have provided an intuitive way for a user to choose from recommended fonts (Kadia et al, paragraph 0003).

With regards to claim 9.
(Original) The method as described in claim 8, the combination of the combination of Jiang et al , Liu et al, Schowtka et al and Pao et al and Kadia et al teaches wherein the document feature includes at least one of document height, document width, layout, text counts, digital images, shapes, or groups (as similarly explained in the rejection of claim 8, Kadia et al teaches the feature data identified/collected includes taking into context images in the document), and is rejected under similar rationale. With regards to claim 14. (Original) The method as described in claim 1, the combination of Jiang et al , Liu et al, Schowtka et al and Pao et al teaches wherein the machine learning includes training a recommendation machine learning model for font recommending, … as similarly explained in the rejection of claim 1, and is rejected under similar rationale. However the combination of Jiang et al , Liu et al, Schowtka et al and Pao et al does not expressly teach the font recommending is based at least in part on compatibility with a topic extracted from the digital content. Yet Kadia et al teaches the font recommending is based at least in part on compatibility with a topic extracted from the digital content (paragraph 0089-0091: a topic such as a birthday is gleaned/extracted from the document content and recommendation for font tags are produced). It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to have modified the combination of Jiang et al , Liu et al, Schowtka et al and Pao et al’s ability to produce a recommendation based upon digital content analysis, such that the content analyzed further includes text from the document, as taught by Kadia et al. The combination would have allowed Jiang et al to have provided an intuitive way for a user to choose from recommended fonts. Claim(s) 10-12 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Jiang et al (“Visual Font Pairing”, published: August 2020, publisher: IEEE, pages 2086-2097) in view of Liu et al (US Application: US 20190138860, published: May 9, 2019, filed: Nov. 8, 2017), in view of Schowtka et al (US Patent: 8595627, issued: Nov. 26, 2013, filed: Jan. 7, 2008), in view of Pao et al (US Application: US 20170262414, published: Sep. 14, 2017, filed: Mar. 10, 2016), and in view of Kaasila et al (EP 2 857 983 A2, published: Aug. 4, 2015).

With regards to claim 10. (Original) The method as described in claim 1: Jiang et al teaches wherein the generating includes determining … of the input font (as similarly explained in the rejection of claim 1, Jiang et al teaches that font feature data is determined and encoded in a vector), and the claim is rejected under similar rationale. However, the combination of Jiang et al, Liu et al, Schowtka et al, and Pao et al does not expressly teach determining a font metric based on glyphs of the input font. Yet Kaasila et al teaches determining a font metric based on glyphs of the input font (paragraphs 0046 and 0050: font metric features determined for a font include features such as font class (serif) and/or cap height (uppercase/cap)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the combination of Jiang et al, Liu et al, Schowtka et al, and Pao et al’s ability to determine font feature data of the font in the document, such that the font feature data would have further included a plurality of additional font metric feature data, as taught by Kaasila et al. The combination would have allowed Jiang et al to have implemented an effective way to quantify font similarity for identification and presentation.

With regards to claim 11.
(Original) The method as described in claim 10: the combination of Jiang et al, Liu et al, Schowtka et al, Pao et al, and Kaasila et al teaches wherein the font metric includes at least one of ascender height, descender height, digit height, lowercase contrast, lowercase height, lowercase stem height, lowercase stem width, lowercase weight, lowercase width, uppercase contrast, uppercase height, stem height, uppercase stem width, uppercase weight, or uppercase width (as similarly explained in the rejection of claim 10, Kaasila et al was explained to teach that uppercase/cap height is identified as feature data), and the claim is rejected under similar rationale.

With regards to claim 12. (Original) The method as described in claim 10: the combination of Jiang et al, Liu et al, Schowtka et al, Pao et al, and Kaasila et al teaches wherein the font metric includes a font class to which the input font belongs (as similarly explained in the rejection of claim 10, Kaasila et al was explained to teach that the font metric includes a font class, ‘serif’, to which the font belongs), and the claim is rejected under similar rationale.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Jiang et al (“Visual Font Pairing”, published: August 2020, publisher: IEEE, pages 2086-2097) in view of Liu et al (US Application: US 20190138860, published: May 9, 2019, filed: Nov. 8, 2017), in view of Schowtka et al (US Patent: 8595627, issued: Nov. 26, 2013, filed: Jan. 7, 2008), in view of Pao et al (US Application: US 20170262414, published: Sep. 14, 2017, filed: Mar. 10, 2016), and in view of Naik et al (US Application: US 2011/0289407, published: Nov. 24, 2011, filed: May 18, 2010).
With regards to claim 21, which depends on claim 1: the combination of Jiang et al, Liu et al, Schowtka et al, and Pao et al teaches … the corresponding glyph … of the selected font (as similarly explained in the rejection of claim 1, Jiang et al teaches that candidate glyphs of said font representations are identified based upon similarity characteristic(s) of the font representation to said font representations, and Jiang et al’s list of candidates was modified to be selectable (to identify the corresponding glyph), as taught by Schowtka. As also explained in the rejection of claim 1, paragraph 0041 of Liu et al explains that the glyph size is included as a font feature used to determine font identification), and the claim is rejected under similar rationale. However, the combination does not expressly teach wherein a size and color of the corresponding glyph is based on a size and color of the glyph of the input font. Yet Naik et al teaches wherein a size and color of the corresponding glyph is based on a size and color of the glyph of the input font (paragraphs 0042, 0044, 0061: characteristics such as font size of training documents and font color can be identified to help determine fonts having characteristics such as color and size to recommend). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Jiang et al, Liu et al, and Schowtka’s ability to identify and encode feature data used in a document, such that the type of feature data identified would have further included size and color, as taught by Naik et al. The combination would have allowed Jiang et al and Schowtka to have helped users choose fonts when composing documents (Naik et al, paragraph 0002).

Claim 24 is rejected under 35 U.S.C.
103 as being unpatentable over Jiang et al (“Visual Font Pairing”, published: August 2020, publisher: IEEE, pages 2086-2097) in view of Liu et al (US Application: US 20190138860, published: May 9, 2019, filed: Nov. 8, 2017), in view of Schowtka et al (US Patent: 8595627, issued: Nov. 26, 2013, filed: Jan. 7, 2008), in view of Pao et al (US Application: US 20170262414, published: Sep. 14, 2017, filed: Mar. 10, 2016), in view of Krishnamurthy et al (US Application: US 20180300609, published: Oct. 18, 2018, filed: Apr. 13, 2017), and in view of Wang et al (US Application: US 2017/0098140, published: Apr. 6, 2017, filed: Oct. 6, 2015).

With regards to claim 24, which depends on claim 23: the combination of Jiang et al, Liu et al, Schowtka et al, Pao et al, and Krishnamurthy et al teaches wherein the first neural network comprises a deep learning network and the second neural network …, as similarly explained in the rejection of claim 1, and the claim is rejected under similar rationale. However, the combination does not expressly teach that the second neural network comprises a Siamese neural network. Yet Wang et al teaches that the recommendation machine learning model comprises a Siamese neural network (Fig. 11, paragraph 0098: a font recommendation system is associated with a Siamese network trained to compare fonts for identification). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the combination of Jiang et al, Liu et al, Schowtka et al, Pao et al, and Krishnamurthy et al’s ability to implement an encoding model and recommendation model, such that the type of recommendation model applied could have been a Siamese neural network model, as taught by Wang et al.
The combination would have allowed the combination of Jiang et al, Liu et al, Schowtka et al, Pao et al, and Krishnamurthy et al to have efficiently recognized fonts and identified similar fonts that would be visually pleasing to users (Wang et al, paragraph 0002).

Response to Arguments

Applicant's arguments filed 07/30/2025 have been fully considered but they are not persuasive. The applicant’s arguments that the newly amended subject matter in independent claims 1, 15, and 22 is allowable are not persuasive in view of the new grounds of rejection necessitated by applicant’s amendments. The applicant’s arguments that claims 2-15, 16, 21, 23, and 24 are allowable are not persuasive since the independent claims have been shown above to be rejected.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILSON W TSUI, whose telephone number is (571) 272-7596. The examiner can normally be reached Monday - Friday, 9 am - 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Renee Chavez, can be reached at 571-270-1104. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILSON W TSUI/
Primary Examiner, Art Unit 2172

Prosecution Timeline

Jun 24, 2022: Application Filed
Nov 11, 2022: Response after Non-Final Action
Sep 09, 2024: Non-Final Rejection — §103, §112
Oct 17, 2024: Interview Requested
Oct 29, 2024: Response Filed
Oct 29, 2024: Examiner Interview Summary
Oct 29, 2024: Examiner Interview (Telephonic)
Feb 28, 2025: Examiner Interview (Telephonic)
Mar 03, 2025: Final Rejection — §103, §112
Mar 18, 2025: Interview Requested
Mar 28, 2025: Examiner Interview (Telephonic)
Mar 31, 2025: Request for Continued Examination
Mar 31, 2025: Examiner Interview Summary
Apr 02, 2025: Response after Non-Final Action
May 17, 2025: Non-Final Rejection — §103, §112
Jul 01, 2025: Interview Requested
Jul 17, 2025: Examiner Interview (Telephonic)
Jul 26, 2025: Examiner Interview Summary
Jul 30, 2025: Response Filed
Oct 31, 2025: Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602535
COMMENT DISPLAY METHOD AND APPARATUS OF A DOCUMENT, AND DEVICE AND MEDIUM
2y 5m to grant • Granted Apr 14, 2026
Patent 12589766
AUTONOMOUS DRIVING SYSTEM AND METHOD OF CONTROLLING SAME
2y 5m to grant • Granted Mar 31, 2026
Patent 12570284
AUTONOMOUS DRIVING METHOD AND DEVICE FOR A MOTORIZED LAND VEHICLE
2y 5m to grant • Granted Mar 10, 2026
Patent 12552376
VEHICLE CONTROL APPARATUS
2y 5m to grant • Granted Feb 17, 2026
Patent 12511993
SYSTEMS AND METHODS FOR CONFIGURING A HIERARCHICAL TRAFFIC MANAGEMENT SYSTEM
2y 5m to grant • Granted Dec 30, 2025
Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 62%
With Interview: 99% (+58.1%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 593 resolved cases by this examiner. Grant probability derived from career allow rate.
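The headline figures above can be sanity-checked against the raw counts the page cites (365 granted of 593 resolved). A minimal Python sketch follows; note the with-interview formula is an assumption for illustration, since the page does not state how that figure is derived:

```python
# Sanity-check the projection panel's figures from the counts on this page.
granted = 365    # applications granted by this examiner
resolved = 593   # total resolved cases (granted + abandoned)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 61.6%, displayed as 62%

# Hypothetical: if the reported +58.1% interview lift were a relative
# multiplier on the base rate, the with-interview estimate would be:
lift = 0.581
with_interview = min(allow_rate * (1 + lift), 1.0)
print(f"Estimated with interview: {with_interview:.0%}")  # ~97%
```

The simple multiplier yields about 97%, a little below the 99% the panel shows, which suggests the tool instead measures the allow rate of the interviewed subpopulation directly rather than scaling the overall rate.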
