Prosecution Insights
Last updated: April 19, 2026
Application No. 18/725,688

CHARACTER GENERATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Non-Final OA: §103, §112
Filed: Jun 28, 2024
Examiner: BASHIR, ADEEL
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)

Grant Probability: 94% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 94% (33 granted / 35 resolved), +32.3% vs TC avg (above average)
Interview Lift: +7.4% on resolved cases with interview (moderate lift)
Typical Timeline: 2y 6m average prosecution; 14 applications currently pending
Career History: 49 total applications across all art units
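The headline figures above reduce to simple ratios over the examiner's 35 resolved cases. A minimal sketch of that arithmetic; the `pct` helper and the `tc_average` baseline are illustrative assumptions (the baseline is back-solved from the displayed +32.3%, not a figure published on this page):

```python
# Illustrative arithmetic behind the stats above. The counts (33 granted,
# 35 resolved) come from this page; pct() and tc_average are hypothetical,
# with the baseline back-solved from the displayed +32.3% delta.

def pct(numerator: int, denominator: int) -> float:
    """Simple percentage, e.g. career allow rate over resolved cases."""
    return 100.0 * numerator / denominator

career_allow = pct(33, 35)                  # granted / resolved
tc_average = 62.0                           # assumed TC-average allow rate

print(round(career_allow, 1))               # 94.3
print(round(career_allow - tc_average, 1))  # 32.3
```

The page rounds 94.3% down to the displayed 94%; the with-interview 99% is presumably the allow rate within the interviewed subgroup, whose counts are not shown here.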

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§103: 85.0% (+45.0% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 0.8% (-39.2% vs TC avg)
Tech Center average is an estimate. Based on career data from 35 resolved cases.
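The four rates above sum to roughly 100%, which suggests they are shares of this examiner's rejections by statute; each "vs TC avg" delta then reduces to a per-statute subtraction from a common baseline. A sketch of that derivation (the 40.0 baseline is back-solved from the displayed deltas and is an assumption about the tool's normalization, not USPTO data):

```python
# Re-deriving the "vs TC avg" deltas shown above. The per-statute rates
# come from this page; the 40.0 baseline is back-solved from the displayed
# deltas and is an assumed normalization, not a published TC figure.

statute_rate = {"101": 5.0, "103": 85.0, "102": 8.3, "112": 0.8}
tc_baseline = 40.0

for statute, rate in statute_rate.items():
    delta = rate - tc_baseline
    print(f"§{statute}: {rate}% ({delta:+.1f}% vs TC avg)")
```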

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Priority

Acknowledgment is made of applicant's foreign priority claim, for U.S. Application No. 18/725,688, based on a foreign application filed on 12/29/2021.

Status of Claims

Claims 1-11 and 13-21 are pending in the application. Claims 1-4 and 13-17 are rejected. Claim 12 is canceled. Claims 5-11 and 18-21 are objected to.

Allowable Subject Matter

Claims 5-11 and 18-21 are objected to as being dependent upon a rejected base claim(s), but would be allowable if rewritten in independent form including all of the limitations of the base claim(s) and any intervening claim(s).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 14 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. In particular, the preamble recites: "A storage medium". However, it is respectfully noted that the specification has separate and distinct definitions for both a computer-readable storage medium and a machine-readable storage medium (paragraphs [0149] and [0160], respectively, in the published version of the disclosure).
Thus, the overall claim scope is unclear for determining eligibility under 35 USC 101 (with respect to possible signal embodiments). Applicant is encouraged to amend claim 14 in the next response to recite "A non-transitory computer-readable storage medium" (as one way to avoid a possible signal embodiment interpretation with respect to 35 USC 101).

Overview of Prior Art Grounds of Rejection

Ground of Rejection 1: Claims 1-3, 13-16 under § 103 over Katz et al. (US20210089707A1) in view of Grosz et al. (US20140095586A1)
Ground of Rejection 2: Claims 4, 17 under § 103 over Katz et al. (US20210089707A1) in view of Grosz et al. (US20140095586A1), further in view of Ozeki et al. (US20210201548A1)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

(Please see the cited paragraphs, sections, pages, or surrounding text in the references for the paraphrased content.)

Ground of Rejection 1

Claims 1, 2, 3, 13, 14, 15, 16 are rejected under 35 U.S.C. § 103 as being unpatentable over Katz et al. (US20210089707A1) in view of Grosz et al. (US20140095586A1).

As per Claim 1, Katz teaches the following portion of Claim 1, which recites: "A method for generating a character, comprising: obtaining a character to be displayed and a pre-selected target style type;"

Katz et al. (US20210089707A1) teaches obtaining (i) characters (via a base font template containing multiple characters) and (ii) a user-selected style input. Katz states that the personalization parameters "may be selected by a user," and may include "an image… of a favorite painting(s)… a picture of an existing font… a Bitmoji®," which corresponds to a pre-selected target style type. (Katz et al., ¶ [0033]). Katz also states the font template module includes "a single base font… including multiple characters 502 a-n," which corresponds to obtaining a character to be displayed (i.e., character(s)/glyphs). (Katz et al., ¶ [0034]). Katz's user-selected personalization parameters provide the pre-selected target style type, and Katz's base font template "including multiple characters" provides the character to be displayed.

Katz teaches the following portion of Claim 1, which recites: "converting the character to be displayed into a target character corresponding to the target style type, wherein the target character is generated in at least one of the following modes: generating the target character in advance on the basis of a style type conversion model and generating the target character in real time on the basis of a style type conversion model;"

Katz et al. (US20210089707A1) teaches a style type conversion model (a style transfer module) that converts base-font content into a styled output font. Katz describes a "style application module 306 (e.g., a module implementing a Neural Style Transfer algorithm; NST)" that creates a personalized font. (Katz et al., ¶ [0032]). Katz further explains that the style application module synthesizes an output font that preserves base-font content while applying the selected style: the module samples "content" from the base font and obtains a "style" from the personalization parameters, then "synthesize[s] a personalized output font that exhibits the content of the base font applied with the style of the personalization parameter(s)." (Katz et al., ¶ [0036]). Katz also teaches the "generating the target character in advance" mode by storing the personalized font "for later use": "The personalized font storage module 308 stores the personalized font… for later use." (Katz et al., ¶ [0037]). Katz's NST-based style application module is a style type conversion model that converts base-font character content into styled output (a target character corresponding to the target style type), and Katz's storage "for later use" satisfies generating the result in advance on the basis of that model.

Katz alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Grosz, they collectively teach all of the limitation(s). Katz and Grosz teach the following portion of Claim 1, which recites: "displaying the target character on a target display interface."

Katz et al. (US20210089707A1) describes creation/storage of personalized fonts and their use in textual communications, but Grosz et al. (US20140095586A1) more directly teaches displaying styled text (characters) on a GUI display interface during editing. Grosz states: "In live editing… a user may mouse over a font style… and see the font change in text window 1001." (Grosz et al., ¶ [0161]). Grosz's "text window 1001" is a target display interface, and the on-screen rendering where the "font change" is shown corresponds to displaying the target character in the selected style.
Before the effective filing date of the claimed invention, a POSITA would have been motivated to integrate Katz et al.'s NST-based personalized font creation and storage (a style conversion model that produces styled output fonts "for later use") with Grosz et al.'s GUI-based text editing and preview that lets a user "see the font change in text window 1001." This combination would predictably improve personalization and usability by allowing Katz's model-generated fonts to be selected and rendered within Grosz's editing/display workflow using standard font selection and rendering mechanisms, producing expected results (styled characters displayed on-screen) without changing the fundamental operation of either system.

As per Claim 2, Katz alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Grosz, they collectively teach all of the limitation(s).

Grosz teaches the following portion of Claim 2, which recites: "The method of claim 1, wherein obtaining a character to be displayed and a pre-selected target style type comprises: determining the target style type selected from a style type list in response to detecting that the character to be displayed is edited". Grosz et al. (US20140095586A1) teaches detecting text editing and, in response, selecting a style from a list. Grosz states: "a user has clicked on the text box 1001 to dynamically edit the text within." (Grosz et al., ¶ [0158]). Grosz further states: "Text editing toolbar 1003 includes a font style selection menu 1004… a drop-down feature… may be leveraged to select from a list of included styles available to the editor." (Grosz et al., ¶ [0160]).

Katz teaches the following portion of Claim 2, which recites: "wherein the style type list comprises a style type corresponding to the style type conversion model." Katz et al. (US20210089707A1) teaches that the style conversion is performed by a model, and the selectable style inputs correspond to that model. Katz states the engine includes "a style application module 306 (e.g., a module implementing a Neural Style Transfer algorithm; NST)." (Katz et al., ¶ [0032]). Katz also states "The personalization parameters may be selected by a user," and may include "an image… a picture of an existing font, a Bitmoji®, etc." (Katz et al., ¶ [0033]). The user-selectable style inputs (Katz) correspond to the NST-based style conversion model (Katz), and those style types are suitable to be included among the selectable "included styles" list used during editing (Grosz).

Before the effective filing date of the claimed invention, a POSITA would have been motivated to make Katz's NST-driven personalized font styles (Katz et al., ¶¶ [0032]-[0033]) available as selectable entries in Grosz's "list of included styles" presented during text editing (Grosz et al., ¶¶ [0158], [0160]), yielding predictable results (more personalized selectable styles during editing).

As per Claim 3, Katz teaches the limitation(s) of Claim 3, which recites: "The method of claim 1, wherein converting the character to be displayed into a target character corresponding to the target style type comprises: obtaining a target character consistent with the character to be displayed from a target character package corresponding to the target style type, wherein the target character package is generated after converting a plurality of characters into a target font on the basis of the style type conversion model; or inputting the character to be displayed into the style type conversion model to obtain a target character corresponding to the target font."

Katz et al. (US20210089707A1) teaches the first alternative ("package/library" approach). Katz describes generating a target character package (a personalized font file) by converting multiple characters of a base font using a style-transfer model, storing it for later use, and then using that stored font to display message characters (consistent content, changed style):

Plurality of characters converted into a target font using a model: Katz's font template "includes a single base font … including multiple characters 502 a-n," and the style application module "creates a personalized font … represented by … including multiple characters 552 a-n." (Katz et al., ¶¶ [0034]-[0035]).

Style type conversion model: Katz's style application module implements "Neural Style Transfer (NST)" and "synthesize[s] a personalized output font that exhibits the content of the base font applied with the style of the personalization parameter(s)." (Katz et al., ¶¶ [0032], [0036]).

Target character package stored and later used to obtain/display characters: "The personalized font storage module 308 stores the personalized font… for later use," and later "the retrieved personalized font can then be used to display the message." (Katz et al., ¶¶ [0037], [0057]).

Consistency of the character content (character to be displayed): Katz explains that other devices "display the textual message 'Hello' … in the personalized font of the user," showing the same message characters rendered in the target style. (Katz et al., ¶ [0031]).

Device Claim 13 does not include any additional limitations that would significantly distinguish it from claim 1. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.

Storage medium Claim 14 does not include any additional limitations that would significantly distinguish it from claim 1. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.

Device Claim 15 does not include any additional limitations that would significantly distinguish it from claim 2. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.

Device Claim 16 does not include any additional limitations that would significantly distinguish it from claim 3. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.

Ground of Rejection 2

Claim 4 is rejected under 35 U.S.C. § 103 as being unpatentable over Katz et al. (US20210089707A1) in view of Grosz et al. (US20140095586A1), and further in view of Ozeki et al. (US20210201548A1).

As per Claim 4, Katz alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Ozeki, they collectively teach all of the limitation(s).

Katz and Ozeki teach the following portion of Claim 4, which recites: "The method of claim 1, wherein the style type conversion model comprises a first font feature extraction sub-model, a second font feature extraction sub-model, a first decoupling model connected to the first font feature extraction sub-model, a second decoupling model connected to the second font feature extraction sub-model, a feature splicing sub-model connected to the first decoupling model and the second decoupling model, and a feature processing sub-model;"

Katz et al. (US20210089707A1) teaches the portion reciting "a first font feature extraction sub-model" and "a second font feature extraction sub-model" and the separation of extracted information into content and style, stating: "feeds the base font … through a convolutional neural network (CNN)," and "The personalization parameter(s) are then fed through the same CNN," with activations sampled to obtain "content" and "style." (Katz et al., ¶ [0036]). Ozeki et al. (US20210201548A1) teaches the portion reciting "a feature splicing sub-model connected to the first decoupling model and the second decoupling model, and a feature processing sub-model," by estimating "a transformation parameter between the extracted first feature amount and a second feature amount," generating a feature amount of a complete font set "based on the estimated transformation parameter," and "generat[ing] the second font set by converting the generated fourth feature amount … into an image." (Ozeki et al., ¶ [0010]).

Katz teaches the following portion of Claim 4, which recites: "wherein the first font feature extraction sub-model and the second font feature extraction sub-model have the same model structure, and are configured to determine character features of a plurality of characters respectively, and the character features comprise a style type feature and a character content feature;" Katz et al. (US20210089707A1) teaches "the same model structure" by stating the personalization parameters are fed through the "same CNN," and teaches "a style type feature and a character content feature" by sampling to obtain "content" and "style." (Katz et al., ¶ [0036]). Katz also teaches "a plurality of characters" by describing fonts "including multiple characters 502 a-n" and "including multiple characters 552 a-n." (Katz et al., ¶¶ [0034]-[0035]).

Katz teaches the following portion of Claim 4, which recites: "the first decoupling model is configured to decouple a character feature extracted by the first font feature extraction sub-model to distinguish the style type feature from the character content feature;" Katz et al. (US20210089707A1) teaches distinguishing style from content by sampling at different stages: activations sampled "at a late stage to obtain 'content,'" and activations sampled "at an earlier stage … to obtain a 'style.'" (Katz et al., ¶ [0036]).

Katz teaches the following portion of Claim 4, which recites: "the second decoupling model is configured to decouple a character feature extracted by the second font feature extraction sub-model to distinguish the style type feature from the character content feature;" Katz et al. (US20210089707A1) also teaches this same style-versus-content distinction for the second extraction pass through the "same CNN," again obtaining "content" and "style" by different-stage sampling. (Katz et al., ¶ [0036]).

Ozeki teaches the following portion of Claim 4, which recites: "the feature splicing sub-model is configured to splice the character features extracted by the first decoupling model and the second decoupling model to obtain a corresponding character style feature;" Ozeki et al. (US20210201548A1) teaches combining feature information via an estimated "transformation parameter" between feature amounts and generating a transformed feature amount for a complete font set "based on the estimated transformation parameter." (Ozeki et al., ¶ [0010]).

Ozeki teaches the following portion of Claim 4, which recites: "and the feature processing sub-model is configured to process the character style feature to obtain the target character of the character to be displayed in the target style type." Ozeki et al. (US20210201548A1) teaches generating the output characters in the target style by "generat[ing] the second font set by converting the generated fourth feature amount of the second font set into an image." (Ozeki et al., ¶ [0010]).

Before the effective filing date of the claimed invention, a POSITA would have been motivated to use Katz et al.'s CNN stage-based extraction of "content" versus "style" as the claimed decoupling of character content feature from style type feature (Katz et al., ¶ [0036]) and then apply Ozeki et al.'s transformation parameter pipeline as the claimed splicing/processing chain that transforms features and converts them into output character images (Ozeki et al., ¶ [0010]), because the combination predictably improves controllability and quality of generating target-style characters.

Device Claim 17 does not include any additional limitations that would significantly distinguish it from claim 4. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.

Conclusion

The prior art made of record and relied upon in this action is as follows:

Patent Literature:
Katz et al. (US20210089707A1), "Personalized fonts"
Ozeki et al. (US20210201548A1), "Font creation apparatus, font creation method, and font creation program"
Grosz et al. (US20140095586A1), "Methods for Dynamic Stylization and Size Editing of Fonts Associated with Images and Theme-Based Graphics Arranged in a Layout Viewed Through an Electronic Interface"

Non-Patent Literature (NPL): (none)

Note: A PDF copy of each NPL reference is attached with this Office Action. URLs are included for applicant convenience. If a link becomes unavailable in the future, the citation information may be used to locate the reference or access archived versions via the Wayback Machine.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed as follows:

Patent Literature: (none)
Non-Patent Literature (NPL): (none)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADEEL BASHIR, whose telephone number is (571) 270-0440. The examiner can normally be reached Monday-Thursday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at (571) 276-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADEEL BASHIR/
Examiner, Art Unit 2616

/DANIEL F HAJNIK/
Supervisory Patent Examiner, Art Unit 2616

Prosecution Timeline

Jun 28, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597209: USING POLYGON MESH RENDER COMPOSITES DURING NEURAL RADIANCE FIELD (NERF) GENERATION
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586333: AUTOMATED METHOD FOR GENERATING PROSTHESIS FROM THREE DIMENSIONAL SCAN DATA, APPARATUS GENERATING PROSTHESIS FROM THREE DIMENSIONAL SCAN DATA AND COMPUTER READABLE MEDIUM HAVING PROGRAM FOR PERFORMING THE METHOD
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586302: RENDERING HAIR
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573126: SPLIT BOUNDING VOLUMES FOR INSTANCES
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12555280: VECTOR GRAPHICS BASED LIVE SKETCHING METHODS AND SYSTEMS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 94%
With Interview: 99% (+7.4%)
Median Time to Grant: 2y 6m
PTA Risk: Low

Based on 35 resolved cases by this examiner. Grant probability derived from career allow rate.
