Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 03/08/2024, 08/26/2024 and 12/20/2014 are being considered by the examiner.
Drawings
The drawings filed on 03/08/2024 are accepted.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “processing unit configured to perform …” in claim 19.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-9, 11, 12, 15, 19 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sokolov et al (US Application: US 20240303247, published: Sep. 12, 2024, filed: Mar. 6, 2023) in view of Cai et al (US Patent: 12141556, issued: Nov. 12, 2024, filed: Sep. 30, 2022).
With regards to claim 1, Sokolov et al teaches a non-transitory computer readable medium storing a software program comprising data and computer implementable instructions that when executed by at least one processor cause the at least one processor to perform operations for facilitating visual formatting of text through natural language, the operations comprising:
accessing a textual content in a natural language (Fig. 2a: a document is opened);
presenting the textual content to an individual in an initial visual format (the document is displayed (fig. 2A));
receiving from the individual a selection of a first portion of the textual content (Fig. 2C: a user makes a specific portion selection of the textual content);
receiving from the individual a first textual input in the natural language (paragraphs 0055, 0056: the user can provide textual input (interpreted to include one or more words) as parameters in response to a user prompt);
analyzing the first textual input to select a first visual format (Fig. 2C ref 155, Figs. 3A-3D, paragraphs 0055, 0056, 0061 and 0062: based on textual input, rewriting of content (a visual format, which can include items such as tone/style or format/design) can be selected and suggested to the user);
receiving from the individual a second textual input in the natural language (Fig. 2C, a user can make another selection of text in the document);
analyzing the second textual input to select a second portion of the textual content (Fig. 2C ref 155, Figs. 3A-3D, paragraphs 0055, 0056, 0061 and 0062: based on textual input, certain portions can be selected/identified for rewriting of content),
analyzing the second textual input to select a second visual format (Fig. 2C ref 155, Figs. 3A-3D, paragraphs 0055, 0056, 0061 and 0062: based on textual input, certain format such as tone/style, format/design or word-selection can be selected by the user from a recommendations area); and
altering the presentation of the textual content, wherein in the altered presentation the first portion is presented in the first visual format, the second portion is presented in the second visual format, and a third portion of the textual content is presented in the initial visual format (Fig. 2C ref 155, Figs. 3A-3D, paragraphs 0055, 0056, 0061 and 0062: the textual content is modified through repeated cycles of the user selecting a first, second, or third portion of textual content from the document, and each of those portions can be modified with a recommended format such as tone/style, format/design or word-selection).
However, Sokolov et al does not expressly teach wherein the second portion includes at least one word not included in the second textual input, and wherein the second textual input includes at least one word not included in the second portion.
Yet Cai teaches wherein the [selected] portion includes at least one word not included in the [corresponding] textual input, and wherein the [corresponding] textual input includes at least one word not included in the [selected] portion (Fig. 1A, Fig. 1B, column 7, lines 33-67 and column 8, lines 1-15: a portion of text within a selected original-paragraph includes a word (such as ‘slides’) that is not in the user’s textual rewrite-prompt/input, and the textual rewrite-prompt/input includes a word ‘concrete’ that is not in the selected original-paragraph. Selection of a corresponding rewrite-visual format is identified as ‘friendly’ in style, and the friendly style is applied based upon referencing content of the initial/original paragraph format. It is further noted that rewrite portions/format can be based on any number of prior rewrite-visual-portions/format using an LLM chain to chain multiple prompts together).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sokolov et al’s ability to iteratively select any number of portions (such as first, second, third, fourth, etc.) of textual content and process any number of textual inputs (such as first, second, third, fourth, etc.) to select a visual re-write format/style, such that each of the one or more iteratively selected particular portions (such as the second, third, fourth, etc. selected portions) could be based on its respective corresponding textual input (such as the second, third, fourth corresponding textual inputs), and such that each selected particular portion would have included a word not included in its corresponding textual input, while its corresponding textual input would have included a word not included in the particular selected portion, as taught by Cai. The combination would have made it easier for Sokolov et al to interact with and utilize LLMs to address and solve multi-task problems.
With regards to claim 2. The non-transitory computer readable medium of claim 1, Sokolov et al teaches wherein the operations further comprise: receiving from the individual a selection of a specific portion of the textual content through a pointing device (paragraphs 0036 and 0052: a user provides a specific selection of the textual content); and analyzing the first textual input to select a sub-portion of the specific portion, thereby selecting the first portion (Fig. 3B, paragraph 0038: a sub-portion of text can be selected to update or rewrite words within the portion).
With regards to claim 3. The non-transitory computer readable medium of claim 1, Sokolov et al teaches wherein selection of the second visual format is further based on at least one word included in the second portion (Fig. 3B, paragraphs 0061 and 0063: as shown, at least one word within the second portion can be identified and rewritten).
With regards to claim 4. The non-transitory computer readable medium of claim 1, Sokolov et al and Cai et al teaches wherein the selection of the second visual format is further based on the initial visual format (as similarly explained in the rejection of claim 1, Cai et al teaches in Fig. 1A, Fig. 1B, column 7, lines 33-67 and column 8, lines 1-15: Selection of a corresponding rewrite-visual format is identified as ‘friendly’ in style, and the friendly style is applied based upon referencing content of the initial/original paragraph format. It is further noted that rewrite portions/format can be based on any number of prior rewrite-visual-portions/format using an LLM chain to chain multiple prompts together), and thus is rejected under similar rationale.
With regards to claim 5. The non-transitory computer readable medium of claim 1, Sokolov et al and Cai et al teaches wherein the selection of the second visual format is further based on the first visual format and the initial visual format (as similarly explained in the rejection of claim 1, Cai et al teaches in Fig. 1A, Fig. 1B, column 7, lines 33-67 and column 8, lines 1-15: Selection of a corresponding rewrite-visual format is identified as ‘friendly’ in style, and the friendly style is applied based upon referencing content of the initial/original paragraph format. It is further noted that rewrite portions/format can be based on any number of prior rewrite-visual-portions/format using an LLM chain to chain multiple prompts together), and thus is rejected under similar rationale.
With regards to claim 6. The non-transitory computer readable medium of claim 1, Sokolov teaches wherein the selection of the first visual format is further based on a particular word of the textual content, as similarly explained in the rejection of claim 1 (word(s) of textual content are referenced when applying a selected visual rewrite), and is rejected under similar rationale.
However, Sokolov does not expressly teach … the selection of the first visual format is based on a particular word of the textual content not included in any one of the first portion or the second portion.
Yet Cai teaches … the selection of the first visual format is based on a particular word of the textual content not included in any one of the first portion or the second portion (Fig. 1A, Fig. 1B, column 7, lines 33-67 and column 8, lines 1-15: Selection of a corresponding rewrite-visual format is identified as ‘friendly’ in style, the friendly style is applied based upon referencing content of the initial/original paragraph format, and the word ‘friendly’ is not in a portion of an ‘initial/original paragraph’. It is further noted that rewrite portions/format can be based on any number of prior rewrite-visual-portions/format using an LLM chain to chain multiple prompts together).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sokolov et al’s ability to iteratively select a first portion of textual content and select a first visual rewrite format, and to iteratively select any subsequent particular second portion of the textual content (as well as process textual input(s) corresponding to the particular portion(s)), such that the first visual format is based on each of the iteratively selected particular portion(s) and is based on a word not in a particular portion (such as the first), as taught by Cai. The combination would have made it easier for Sokolov et al to interact with and utilize LLMs to address and solve multi-task problems.
With regards to claim 7. The non-transitory computer readable medium of claim 6, Sokolov et al teaches wherein the operations further comprise analyzing the first textual input to identify the particular word, as similarly explained in the rejection of claim 1 (Sokolov et al, Fig. 2C ref 155, Figs. 3A-3D, paragraphs 0055, 0056, 0061 and 0062: the first textual input, having words, can be analyzed when the user provides the textual input as parameters), and is rejected under similar rationale.
With regards to claim 8. The non-transitory computer readable medium of claim 1, the combination of Sokolov et al and Cai teaches wherein the selection of the second portion is further based on a particular word of the textual content not included in the second portion (as similarly explained in the rejection of claim 1, Cai was explained to show in Fig. 1A, Fig. 1B, column 7, lines 33-67 and column 8, lines 1-15 that the second portion is selected based on the word ‘friendly’ not included in the second portion (as it is based on the second textual input having the word ‘friendly’)), and is rejected under similar rationale.
With regards to claim 9. The non-transitory computer readable medium of claim 8, the combination of Sokolov et al and Cai teaches wherein the operations further comprise analyzing the second textual input to identify the particular word (as similarly explained in the rejection of claim 1, Cai was explained to show in Fig. 1A, Fig. 1B, column 7, lines 33-67 and column 8, lines 1-15 that the second portion is selected based on the word ‘friendly’ not included in the second portion (as it is based on the second textual input having the word ‘friendly’)), and is rejected under similar rationale.
With regards to claim 11. The non-transitory computer readable medium of claim 1, the combination of Sokolov et al and Cai teaches wherein the operations further comprise: generating a textual output in the natural language, the textual output refers to the first visual format and the second visual format; and presenting the textual output to the individual (as similarly explained in the rejection of claim 1, Sokolov et al can apply visual formatting/rewriting to a plurality of portions (such as first or second visual rewrites) and the combination of Sokolov et al and Cai teaches the rewrite(s)/visual-format(s) are applied and presented to the user), and is rejected under similar rationale.
With regards to claim 12. The non-transitory computer readable medium of claim 1, the combination of Sokolov et al and Cai teaches wherein the operations further comprise: after altering the presentation of the textual content, receiving from the individual a third textual input in the natural language; analyzing the third textual input to select a third visual format; analyzing the third textual input to select a fourth portion of the textual content, wherein the fourth portion includes at least some but not all of the first portion, wherein the fourth portion includes at least some but not all of the second portion, and wherein the fourth portion includes at least some but not all of the third portion; and modifying the presentation of the textual content, wherein in the modified presentation all parts of the first portion not included in the fourth portion are presented in the first visual format, all parts of the second portion not included in the fourth portion are presented in the second visual format, all parts of the third portion not included in the fourth portion are presented in the initial visual format, and the fourth portion is presented in the third visual format (as similarly explained in the rejection of claim 1, Sokolov et al’s ability to iteratively select any number of portions (such as first, second, third, fourth, etc.) of textual content and process any number of textual inputs (such as first, second, third, fourth, etc.) to select (and output) a visual re-write format/style is modified (by the teachings of Cai) such that each of the one or more iteratively selected particular portions (such as the second, third, fourth, etc. selected portions) is further based on its respective corresponding textual input (such as the second, third, fourth corresponding textual inputs), and such that each selected particular portion would have further included a word not included in its corresponding textual input, while its corresponding textual input would further have included a word not included in the particular selected portion), and is rejected under similar rationale.
With regards to claim 15. The non-transitory computer readable medium of claim 1, the combination of Sokolov and Cai et al teaches wherein the operations further comprise using a machine learning model to analyze the second textual input to select the second portion of the textual content (as similarly explained in the rejection of claim 1, Sokolov et al was explained to teach in Fig. 2C ref 155, Figs. 3A-3D, paragraphs 0055, 0056, 0061 and 0062 that, based on textual input, certain format such as tone/style, format/design or word-selection can be selected and applied to particular selected content of a second portion designated for rewrite), and is rejected under similar rationale.
With regards to claim 19, the combination of Sokolov and Cai et al teaches a system for facilitating visual formatting of text through natural language, the system comprising at least one processing unit configured to perform operations, the operations comprise: accessing a textual content in a natural language; presenting the textual content to an individual in an initial visual format; receiving from the individual a selection of a first portion of the textual content; receiving from the individual a first textual input in the natural language; analyzing the first textual input to select a first visual format; receiving from the individual a second textual input in the natural language; analyzing the second textual input to select a second portion of the textual content, wherein the second portion includes at least one word not included in the second textual input, and wherein the second textual input includes at least one word not included in the second portion; analyzing the second textual input to select a second visual format; and altering the presentation of the textual content, wherein in the altered presentation the first portion is presented in the first visual format, the second portion is presented in the second visual format, and a third portion of the textual content is presented in the initial visual format, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
With regards to claim 20, the combination of Sokolov and Cai et al teaches a method for facilitating visual formatting of text through natural language, the method comprising: accessing a textual content in a natural language; presenting the textual content to an individual in an initial visual format; receiving from the individual a selection of a first portion of the textual content; receiving from the individual a first textual input in the natural language; analyzing the first textual input to select a first visual format; receiving from the individual a second textual input in the natural language; analyzing the second textual input to select a second portion of the textual content, wherein the second portion includes at least one word not included in the second textual input, and wherein the second textual input includes at least one word not included in the second portion; analyzing the second textual input to select a second visual format; and altering the presentation of the textual content, wherein in the altered presentation the first portion is presented in the first visual format, the second portion is presented in the second visual format, and a third portion of the textual content is presented in the initial visual format, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sokolov et al (US Application: US 20240303247, published: Sep. 12, 2024, filed: Mar. 6, 2023) in view of Cai et al (US Patent: 12141556, issued: Nov. 12, 2024, filed: Sep. 30, 2022) in view of Baughman et al (US Application: US 20220335224, published: Oct. 20, 2022, filed: Apr. 15, 2021).
With regards to claim 10. The non-transitory computer readable medium of claim 1, the combination of Sokolov et al and Cai et al teaches wherein the initial visual format, the first visual format and the second visual format, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
However, the combination does not teach wherein the initial visual format, the first visual format and the second visual format differ in at least one of color, typeface or size.
Yet Baughman et al teaches wherein the initial visual format, the first visual format and the second visual format differ in at least one of color, typeface or size (paragraphs 0043, 0044: based on context of content within particular portions of content, the visual format can adaptively change, and the format can include a change of font size).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sokolov et al and Cai et al’s ability to apply visual formats to different portions of content within a document, such that subsequent iterations of visual formats applied can differ based on context, as taught by Baughman et al. The combination would have implemented an assistance mechanism to make presentation of text more effective and cohesive by taking into account content context.
Claim(s) 13 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sokolov et al (US Application: US 20240303247, published: Sep. 12, 2024, filed: Mar. 6, 2023) in view of Cai et al (US Patent: 12141556, issued: Nov. 12, 2024, filed: Sep. 30, 2022) in view of Yashunaga et al (“TopicEQ: A Joint Topic and Mathematical Equation Model for Scientific Texts”, publisher: AAAI, published: 2019, pages 7394-7401).
With regards to claim 13. The non-transitory computer readable medium of claim 1, the combination of Sokolov et al and Cai et al teaches wherein the operations further comprise: … a first word of the first textual input; … a second word of the first textual input; … select the first visual format, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
However, the combination does not expressly teach … identifying a first mathematical object in a mathematical space, the first mathematical object corresponds to a first word of the first textual input; identifying a second mathematical object in a mathematical space, the second mathematical object corresponds to a second word of the first textual input; calculating a function of the first mathematical object and the second mathematical object to obtain a third mathematical object in the mathematical space; and using the third mathematical object to select the first visual format.
Yet Yashunaga et al teaches … identifying a first mathematical object in a mathematical space, the first mathematical object corresponds to a first word of the first textual input; identifying a second mathematical object in a mathematical space, the second mathematical object corresponds to a second word of the first textual input; calculating a function of the first mathematical object and the second mathematical object to obtain a third mathematical object in the mathematical space; and using the third mathematical object to select the first visual format (pages 7394, 7398, Table 4, Table 6: a mathematical space is identified using words from input text to derive a topic. For each topic recognized from word(s) in the textual input, one or more mathematical objects/variables are identified to be used with a calculating function/equation to obtain an output variable as a third mathematical object, and an entire generated equation is the visual format selected. It is noted that for each different topic, a different generated equation (visual format) is selected for generation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sokolov et al and Cai et al’s ability to identify and process words of textual input to obtain/select a desired visual format (i.e. a desired first or second visual format), such that the identification would have further included identifying first, second and third mathematical object(s) to select and generate a visual equation format (i.e. for a first or second visual format), as taught by Yashunaga et al. The combination would have allowed Sokolov et al and Cai et al to have automatically and efficiently correlated text to corresponding equations.
With regards to claim 14. The non-transitory computer readable medium of claim 1, the combination of Sokolov et al, Cai and Yashunaga et al teaches wherein the operations further comprise: identifying a first mathematical object in a mathematical space, the first mathematical object corresponds to a word of the first textual input; identifying a second mathematical object in the mathematical space, the second mathematical object corresponds to a word of the second textual input; calculating a function of the first mathematical object and the second mathematical object to obtain a third mathematical object in the mathematical space; and using the third mathematical object to select the second visual format, as similarly explained in the rejection of claim 13, and is rejected under similar rationale.
Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sokolov et al (US Application: US 20240303247, published: Sep. 12, 2024, filed: Mar. 6, 2023) in view of Cai et al (US Patent: 12141556, issued: Nov. 12, 2024, filed: Sep. 30, 2022) in view of Suzuki et al (US Application: US 20170351662, published: Dec. 7, 2017, filed: May 12, 2017).
With regards to claim 16. The non-transitory computer readable medium of claim 1, the combination of Sokolov et al and Cai et al teaches wherein the second textual input … and the selection of the second portion, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
However, the combination does not expressly teach wherein the second textual input includes an adverb, and the selection of the second portion is based on the adverb.
Yet Suzuki et al teaches wherein the … textual input includes an adverb, and the selection of the … portion is based on the adverb (paragraphs 0017, 0062, 0069, 0092, 0093: input text can reference independent claim text; a representative term (adverb) is identified, and a portion of dependent claim text is selected and referenced based on the representative term (adverb)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sokolov et al and Cai et al’s ability to process textual input (the second textual input) and use the textual input to correlate and identify/select a portion of textual content (the second portion), such that the correlation would have been modified to identify and reference an adverb from the textual input to make the identification/selection, as taught by Suzuki et al. The combination would have allowed identification of important keywords between the input text and other text by taking into account relationships between the input and the other text.
Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sokolov et al (US Application: US 20240303247, published: Sep. 12, 2024, filed: Mar. 6, 2023) in view of Cai et al (US Patent: 12141556, issued: Nov. 12, 2024, filed: Sep. 30, 2022) in view of Zuo et al (US Application: US 20180018308, published: Jan. 18, 2018, filed: Jan. 7, 2016) in view of Hutchinson (US Application: US 2021/0383489, published: Dec. 9, 2021, filed: Aug. 16, 2019).
With regards to claim 17. The non-transitory computer readable medium of claim 1, the combination of Sokolov et al and Cai et al teaches wherein the first textual input …, and the selection of the first visual format, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
However, the combination does not expressly teach wherein the first textual input includes an adverb, and the selection of the first visual format is based on the adverb.
Yet Zuo et al teaches wherein the … textual input includes a [verb], and the selection of the … visual format is based on the [verb] (Fig. 8B: textual input is processed for parameters that include verbs (such as ‘Add’) and a visual format is selected based on the verb/action (*paragraph numbering formatting)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sokolov et al and Cai et al’s ability to process textual input (the first textual input) and use the textual input to correlate and identify/select a visual format (the first visual format), such that the visual format correlation would have been based on a verb in the textual input, as taught by Zuo et al. The combination would have allowed a user of Sokolov et al and Cai et al’s system to efficiently perform editing on a target based upon semantic analysis of input words.
However, while the combination of Sokolov et al, Cai et al and Zuo et al teaches processing textual input parameter(s) from user input commands to select a first visual format (where the textual parameter(s) include a verb), the combination does not expressly teach that the textual input parameters include an adverb.
Yet Hutchinson teaches user input commands have parameters that include an adverb (paragraphs 0060 and 0061: input commands can include adverbs (as well as other types of parts-of-speech, which can include verbs)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sokolov et al, Cai et al and Zuo et al’s ability to process textual input parameters (such as parameters that include verb(s)) from user natural language commands (and to select a visual format based on the textual input parameter (verb)), such that the parameters would have been further modified to support adverbs in processed input commands, as taught by Hutchinson. The combination would have allowed efficient understanding and generation of actions on behalf of the person, given user input commands.
Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sokolov et al (US Application: US 20240303247, published: Sep. 12, 2024, filed: Mar. 6, 2023) in view of Cai et al (US Patent: 12141556, issued: Nov. 12, 2024, filed: Sep. 30, 2022) in view of Zuo et al (US Application: US 20180018308, published: Jan. 18, 2018, filed: Jan. 7, 2016).
With regards to claim 18. The non-transitory computer readable medium of claim 1, the combination of Sokolov et al and Cai et al teaches wherein the first textual input … , and the selection of the first visual format, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
However, the combination does not expressly teach wherein the first textual input includes a verb, and the selection of the first visual format is based on the verb.
Yet Zuo et al teaches wherein the first textual input includes a verb, and the selection of the first visual format is based on the verb (Fig. 8B: textual input is processed for parameters that include verbs (such as ‘Add’) and a visual format is selected based on the verb/action (*paragraph numbering formatting)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sokolov et al and Cai et al’s ability to process textual input (the first textual input) and use the textual input to correlate and identify/select a visual format (the first visual format), such that the visual format correlation would have been based on a verb in the textual input, as taught by Zuo et al. The combination would have allowed a user of Sokolov et al and Cai et al’s system to efficiently perform editing on a target based upon semantic analysis of input words.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Religa et al (US Application: US 2023/0259713): This reference teaches inputting content and detected tone into a machine learning model and obtaining a rephrased content segment with modified tone.
Peleg et al (US Patent: 11636258): This reference teaches constructing textual output options based upon analysis of document text.
Pu et al (US Application: 2022/0284904): This reference teaches text editing using natural language input for assistance systems.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILSON W TSUI whose telephone number is (571)272-7596. The examiner can normally be reached Monday - Friday 9 am -6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILSON W TSUI/Primary Examiner, Art Unit 2172