DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-19 are pending.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/30/23 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement filed 9/10/25 fails to comply with the provisions of 37 CFR 1.97, 1.98 and MPEP § 609 because an English translation has not been provided for the NPL. It has been placed in the application file, but the information referred to therein has not been considered as to the merits. Applicant is advised that the date of any re-submission of any item of information contained in this information disclosure statement or the submission of any missing element(s) will be the date of submission for purposes of determining compliance with the requirements based on the time of filing the statement, including all certification requirements for statements under 37 CFR 1.97(e). See MPEP § 609.05(a).
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: dialogue area image 190 in Para. 00194, line 21. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 1000-2 in Fig. 10B. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claim 2 is objected to because of the following informalities: In line 2, “in order” should read –in an order–. Appropriate correction is required.
Claim 4 is objected to because of the following informalities: In line 2, “vertically scrolling order” should read –a vertically scrolling order–. Appropriate correction is required.
Claim 7 is objected to because of the following informalities:
In line 3, “the same location” should read –a same location–.
In line 8, “is starting” should read –starts–.
Appropriate correction is required.
Claim 10 is objected to because of the following informalities: In line 5, “sound effect” should read –a sound effect–. Appropriate correction is required.
Claim 19 is objected to because of the following informalities:
In line 2, “in order” should read –in an order–.
In line 7, “configured to,” should read –configured to:–. The comma should be a colon.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "content" in line 3. It is unclear and indefinite if this is the same as the content previously recited in the claim. Claim 1 also recites the limitation “text information” in line 5. It is unclear and indefinite if this is the same as the text information previously recited in the claim.
Claim 2 depends on claim 1 and is therefore also rejected under 112(b).
Claim 3 depends on claim 2 and is therefore also rejected under 112(b).
Claims 4 and 5 depend on claim 3 and are therefore also rejected under 112(b).
Claim 6 depends on claim 5 and is therefore also rejected under 112(b).
Claim 7 depends on claim 6 and is therefore also rejected under 112(b).
Claim 8 recites the limitation “order” in line 8. It is unclear and indefinite if this is the same as the order previously recited in the claims.
Claim 9 recites the limitation "text" in line 5. It is unclear and indefinite if this is the same as the text previously recited in the claims. Claim 9 also recites the limitation “OCR” in line 5. It is unclear and indefinite if this is the same as the OCR previously recited in the claims.
Claim 10 recites the limitation “text” in line 3. It is unclear and indefinite if this is the same as the text previously recited in the claims.
Claim 11 depends on claim 2 and is therefore also rejected under 112(b).
Claim 12 depends on claim 11 and is therefore also rejected under 112(b).
Claim 13 depends on claim 2 and is therefore also rejected under 112(b).
Claim 14 depends on claim 13 and is therefore also rejected under 112(b).
Claim 15 recites the limitation "text" in line 4. It is unclear and indefinite if this is the same as the text previously recited in the claims. Claim 15 also recites the limitation “the updated content” in line 4. There is insufficient antecedent basis for this limitation in the claim.
Claim 16 recites the limitation "an utterer" in line 2. It is unclear and indefinite if this is the same as the utterer previously recited in the claims. Claim 16 also recites the limitation “speech bubble” in line 4. It is unclear and indefinite if this is the same as the speech bubble previously recited in the claims.
Claim 17 depends on claim 1 and is therefore also rejected under 112(b).
Claim 18 recites the limitation "content" in line 5. It is unclear and indefinite if this is the same as the content previously recited in the claim. Claim 18 also recites the limitation “text information” in line 7. It is unclear and indefinite if this is the same as the text information previously recited in the claim.
Claim 19 depends on claim 18 and is therefore also rejected under 112(b).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 11-15, and 17-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a method, a non-transitory computer-readable recording medium, and a system for providing text information associated with content. With respect to the analysis of claim 1 (claims 17 and 18 contain similar limitations):
Step 1:
With regard to Step 1, claim 1 is directed to a method; and therefore, the claim is directed to one of the statutory categories of inventions.
Step 2A, Prong One:
With regard to Step 2A, Prong One, the limitations in claim 1 “identifying content including an image; extracting text from the image included in the content; and providing text information including the extracted text as the text information associated with the content,” as drafted, recite an abstract idea, i.e., a process that, under its broadest reasonable interpretation, covers performance of the limitations manually or in the mind by a human. That is, a person can identify text and images of a comic, extract text from the image of the comic, and determine information about the extracted text (i.e., whether the text is a narration or sound effect) of the comic. These concepts fall under the “mental processes” grouping of abstract ideas, i.e., concepts performed in the human mind, including observation, evaluation, judgment, and/or opinion.
Step 2A, Prong Two:
The 2019 PEG defines the phrase “integration into a practical application” to require an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception. In the instant case, there are no additional steps/elements/limitations in the claims except the following: “uploaded to a content server” in claim 1; “non-transitory computer-readable recording medium”, “processor”, and “computer system” in claim 17; and “computer system”, “at least one processor”, and “uploaded to a content server” in claim 18. The uploading is data gathering/data input. The content server, processor/at least one processor, computer system, and non-transitory computer-readable recording medium are generic computer components. These limitations merely add routine and conventional elements to perform the judicial exception and do not apply the judicial exception in a practical application. Accordingly, the above-mentioned additional elements/limitations do not integrate the abstract idea into a practical application; and therefore, the claims are directed to an abstract idea.
Step 2B:
Because the claims fail under Step 2A, they are further evaluated under Step 2B. The claims herein do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements/limitations used to perform the steps amount to no more than insignificant, routine, and conventional elements. Mere instructions to apply an exception using generic components cannot provide an inventive concept. Therefore, claims 1, 17, and 18 are not patent eligible.
Furthermore, with regard to claims 2-4 and 11-15 viewed individually, the additional steps of these claims, under their broadest reasonable interpretation, amount to extra-solution activity and do not provide meaningful limitations that transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Accordingly, they are not patent eligible.
However, claims 5-10, 16, and 19 provide meaningful limitations that transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Accordingly, they are patent eligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 17, and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kwon (KR 20200113675 A, see provided machine translation).
Regarding claim 1, Kwon teaches, A method of providing text information associated with content, performed by a computer system, the method comprising (Para. 0008: method for generating a webtoon video in which the service server generates character information and dialogue information. Character information is generated by the service server extracting text displayed in speech balloons in each scene image and dialogue information is generated; Paras. 0017 and 0056: the method of generating a webtoon video is executed by a computer):
identifying content including an image uploaded to a content server (Para. 0008: a webtoon image is received by the service server and the speaker and listener in each speech balloon are displayed);
extracting text from the image included in the content (Para. 0008: the service server extracts text displayed in the speech balloons included in each scene image);
and providing text information including the extracted text as the text information associated with the content (Para. 0008: dialogue information is generated using the extracted text displayed in speech balloons in each scene image; Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue).
Regarding claim 17, Kwon teaches, A non-transitory computer-readable recording medium storing instructions that (Para. 0056: the method of generating a webtoon video is implemented as a program executed on a computer and stored in a medium), when executed by a processor (Para. 0057: the program is read by a processor (CPU) of the computer), cause the processor to perform the method of claim 1 (see claim 1 above) on the computer system (Paras. 0017 and 0056: the method of generating a webtoon video is executed by a computer).
Regarding claim 18, Kwon teaches, A computer system for providing text information associated with content, the computer system comprising (Para. 0008: method for generating a webtoon video in which the service server generates character information and dialogue information. Character information is generated by the service server extracting text displayed in speech balloons in each scene image and dialogue information is generated; Paras. 0017 and 0056: the method of generating a webtoon video is executed by a computer):
at least one processor configured to execute instructions readable by the computer system (Para. 0056: the method of generating a webtoon video is implemented as a program executed on a computer and stored in a medium; Para. 0057: the program is read by a processor (CPU) of the computer),
wherein the at least one processor is configured to identify content including an image uploaded to a content server (Para. 0008: a webtoon image is received by the service server and the speaker and listener in each speech balloon are displayed; Para. 0057: the program is read by a processor (CPU) of the computer and communicates with the server), to extract text from the image included in the content (Para. 0008: the service server extracts text displayed in the speech balloons included in each scene image), and to provide text information including the extracted text as the text information associated with the content (Para. 0008: dialogue information is generated using the extracted text displayed in speech balloons in each scene image; Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue).
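For illustration only (not part of the prosecution record): the identify-extract-provide pipeline mapped above can be sketched in a few lines of Python. pytesseract and Pillow are assumed dependencies, and every name below is hypothetical rather than drawn from Kwon or the claims.

    # Minimal sketch: identify content including an image, extract text from
    # that image via OCR, and provide the extracted text as text information
    # associated with the content.
    from dataclasses import dataclass

    from PIL import Image
    import pytesseract


    @dataclass
    class TextInformation:
        content_id: str        # identifies the content on the content server
        extracted_text: str    # text extracted from the image by OCR


    def provide_text_information(content_id: str, image_path: str) -> TextInformation:
        image = Image.open(image_path)                  # the image included in the content
        extracted = pytesseract.image_to_string(image)  # OCR, cf. Kwon at Para. 0039
        return TextInformation(content_id=content_id, extracted_text=extracted.strip())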
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-5, 13-14, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kwon (KR 20200113675 A, see provided machine translation) in view of Kang (KR 102427651 B1, see provided machine translation).
Regarding claim 2, Kwon teaches the limitations as explained above in claim 1.
Kwon further teaches, The method of claim 1 (see claim 1 above), wherein the image includes (Para. 0008: method for generating a webtoon video in which the service server generates character information and dialogue information. Character information is generated by the service server extracting text displayed in speech balloons in each scene image and dialogue information is generated; Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue; Paras. 0039-0040),
the extracted text is the dialogue extracted from the text included in the image (Para. 0008: method for generating a webtoon video in which the service server generates character information and dialogue information. Character information is generated by the service server extracting text displayed in speech balloons in each scene image and dialogue information is generated; Para. 0039: the service server may generate dialogue information by extracting text in a speech balloon using OCR),
and the text information includes each line of a plurality of lines included in the dialogue and order information of each line (Para. 0008: “the method for generating a webtoon video by converting a dialogue line”; Para. 0021: each line is converted into a voice corresponding to the speaker of each line; Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue; Para. 0040: the order of dialogue is determined).
Kwon does not expressly disclose the following limitation: wherein the image includes a plurality of cuts of the content in order.
However, Kang teaches, wherein the image includes a plurality of cuts of the content in order (Paras. 0017-0018: the cut images of the comic book images are arranged in a designated number order (i.e., have a designated sequence number); Para. 0032: the cut image is sequentially arranged; Para. 0043).
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine the image including cuts of the content in order as taught by Kang with the method of Kwon in order to easily convert a comic book image into a webtoon scroll image (Kang, Abstract and Para. 0012). Therefore, one of ordinary skill in the art would have been able to combine these claimed elements by known methods, and, in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 2.
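For illustration only: a minimal sketch of text information that carries each line of the dialogue together with order information for each line, as mapped to Kwon at Paras. 0013 and 0040 above. The data layout and names are assumptions, not Kwon's disclosed format.

    # Each dialogue line is stored with its position in the reading order.
    from dataclasses import dataclass


    @dataclass
    class DialogueLine:
        order: int   # order information of the line (cf. Kwon, Para. 0040)
        text: str    # the line itself


    def build_text_information(lines: list[str]) -> list[DialogueLine]:
        # Order is assigned by list position here; Kwon determines it from a
        # preset reading direction, which this sketch abstracts away.
        return [DialogueLine(order=i, text=line) for i, line in enumerate(lines, start=1)]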
Regarding claim 3, the combination of Kwon and Kang teaches the limitations as explained above in claim 2.
The combination of Kwon and Kang further teaches, The method of claim 2 (see claim 2 above), wherein the extracting of the text comprises (Kwon, Para. 0008: method for generating a webtoon video in which the service server generates character information and dialogue information. Character information is generated by the service server extracting text displayed in speech balloons in each scene image and dialogue information is generated; Kwon, Para. 0039: the service server may generate dialogue information by extracting text in a speech balloon using OCR):
detecting the plurality of cuts in the image (Kang, Paras. 0017-0018: there are cut images of the comic book images, coordinates of the cut image are recognized by the outermost coordinate recognition unit, and a unique sequence number for each of the cut images is designated);
generating each cut image including each cut of the plurality of cuts (Kang, Para. 0034: the cut image is an image in which the cut drawn in the picture column is converted into an image file, and the cut image is stored and has a unique file name for each cut of the comic book; Kang, Para. 0048: the comic book image is separated into a cut image in which a sequence number is designated for each cut image);
and extracting text from cut images corresponding to the plurality of cuts (Kwon, Para. 0008: method for generating a webtoon video in which the service server generates character information and dialogue information. Character information is generated by the service server extracting text displayed in speech balloons in each scene image and dialogue information is generated; Kwon, Para. 0039: the service server may generate dialogue information by extracting text in a speech balloon using OCR; Kang, Para. 0048: the comic book image is separated into a cut image in which a sequence number is designated for each cut image and there is a speech bubble layer and dialogue layer in the cut image; Kang, Para. 0036: the dialogue layer corresponds to character data; Kang, Paras. 0017-0018: there are cut images of the comic book images, coordinates of the cut image are recognized by the outermost coordinate recognition unit, and a unique sequence number for each of the cut images is designated; Kang, Para. 0034: the cut image is an image in which the cut drawn in the picture column is converted into an image file, and the cut image is stored and has a unique file name for each cut of the comic book).
The proposed combination as well as the motivation for combining the Kwon and Kang references presented in the rejection of claim 2 apply to claim 3 and are incorporated herein by reference. Therefore, the method recited in claim 3 is met by Kwon and Kang.
Regarding claim 4, the combination of Kwon and Kang teaches the limitations as explained above in claim 3.
The combination of Kwon and Kang further teaches, The method of claim 3 (see claim 3 above), wherein the plurality of cuts are included in the image in vertically scrolling order (Kang, Paras. 0017-0018: there are cut images of the comic book images, coordinates of the cut image are recognized by the outermost coordinate recognition unit, and a unique sequence number/order for each of the cut images is designated; Kang, Para. 0043: the cut images are arranged in a designated order to be spaced apart by the interval set by the blank size setting unit; Kang, Para. 0042: the blank size on the scroll image corresponds to the vertical spacing between the cut images; Kang, Para. 0048: the webtoon scroll image is arranged in the cut image arrangement step; Kang, Paras. 0053 and 0055: the cut images are arranged in the designated order 1g; Kang, Figs. 7, 9, and 10: cut images 1b are arranged in a vertical order 1g and are separated by blank spaces 1f),
and the each cut image is configured to further include a blank area of a predetermined size above and below the each cut (Kang, Para. 0043: the cut images are arranged in a designated order to be spaced apart by the interval set by the blank size setting unit; Kang, Para. 0042: the blank size is set on the scroll image and corresponds to the vertical spacing between the cut images; Kang, Para. 0048: the blank size is set in the blank size setting step; Kang, Fig. 6: cut images 1b are separated by blank spaces 1f, with blank spaces 1f above and below cut images 1b; Kang, Figs. 8-10).
The proposed combination as well as the motivation for combining the Kwon and Kang references presented in the rejection of claim 3 apply to claim 4 and are incorporated herein by reference. Therefore, the method recited in claim 4 is met by Kwon and Kang.
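For illustration only: a minimal Pillow sketch of arranging cut images in vertical scrolling order with a blank area of a predetermined size above and below each cut, in the manner the Examiner reads Kang (Paras. 0042-0043 and Fig. 6). The white background and horizontal centering are assumptions.

    from PIL import Image


    def stack_cuts_vertically(cuts: list[Image.Image], blank_px: int = 80) -> Image.Image:
        """Stack cuts in their designated order, separated by blank areas."""
        width = max(cut.width for cut in cuts)
        height = sum(cut.height + 2 * blank_px for cut in cuts)
        scroll = Image.new("RGB", (width, height), "white")
        y = 0
        for cut in cuts:                  # cuts are assumed already ordered
            y += blank_px                 # blank area above the cut
            scroll.paste(cut, ((width - cut.width) // 2, y))
            y += cut.height + blank_px    # blank area below the cut
        return scroll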
Regarding claim 5, the combination of Kwon and Kang teaches the limitations as explained above in claim 3.
The combination of Kwon and Kang further teaches, The method of claim 3 (see claim 3 above), wherein the extracting of the text from the cut images comprises (Kwon, Para. 0008: method for generating a webtoon video in which the service server generates character information and dialogue information. Character information is generated by the service server extracting text displayed in speech balloons in each scene image and dialogue information is generated; Kwon, Para. 0039: the service server may generate dialogue information by extracting text in a speech balloon using OCR; Kang, Para. 0048: the comic book image is separated into a cut image in which a sequence number is designated for each cut image and there is a speech bubble layer and dialogue layer in the cut image; Kang, Para. 0036: the dialogue layer corresponds to character data; Kang, Paras. 0017-0018: there are cut images of the comic book images, coordinates of the cut image are recognized by the outermost coordinate recognition unit, and a unique sequence number for each of the cut images is designated; Kang, Para. 0034: the cut image is an image in which the cut drawn in the picture column is converted into an image file, and the cut image is stored and has a unique file name for each cut of the comic book):
detecting a dialogue area including the dialogue for the each cut image (Kwon, Para. 0008: “the service server extracts text displayed in one or more speech balloons included in each scene image to generate dialogue information”; Kwon, Para. 0013; Kwon, Para. 0040: when generating dialogue information, the order of dialogue is determined in which text in a speech bubble is extracted according to a preset direction; Kwon, Para. 0048: a speech balloon is extracted; Kang, Para. 0017: “a speech bubble layer overlapping unit for overlapping the speech bubble layer by designating coordinate values to the cut images so that the speech bubble layer is overlapped for each of the cut images; and a dialogue layer overlapping unit overlapping the dialogue layer on the speech bubble layer so as to be inserted into the speech bubble layer”; Kang, Fig. 11: speech balloon layer 1d includes dialogue layer 1e);
extracting text included in the dialogue area for each detected dialogue area using optical character recognition (OCR) (Kwon, Para. 0039: the service server extracts text displayed in one or more speech balloons included in each scene image to generate dialogue information and dialogue information is generated by extracting text in a speech bubble using OCR);
and generating the text information based on the text extracted for each detected dialogue area (Kwon, Para. 0008: dialogue information is generated using the extracted text displayed in speech balloons in each scene image; Kwon, Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue),
wherein the dialogue area is an area including a speech bubble included in the image, an area including monologue or narration by an utterer or a character of the content, or an area including explanatory text of the content (Kwon, Fig. 3: the speech balloons contain text in the image; Note: the Examiner selects the speech bubble limitation),
and the text information includes, as the order information, information regarding from which dialogue area and from which cut the text extracted for each detected dialogue area is extracted (Kwon, Para. 0008: “the service server extracts text displayed in one or more speech balloons included in each scene image to generate dialogue information”; Kwon, Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue; Kang, Para. 0036: dialogue layers 1e correspond to character data and are formed to include a unique label number; Kang, Para. 0055: cut images 1b are arranged in the designated order 1g; Kang, Para. 0056: there is overlapping of speech bubble layer 1d for each cut image 1b; Kang, Para. 0057: dialogue layer 1e is inserted into speech bubble layer 1d; Kang, Fig. 11: each cut image 1b contains an order 1g (i.e., sequential number). Order numbers 4 and 5 contain a speech bubble layer 1d and dialogue layer 1e).
The proposed combination as well as the motivation for combining the Kwon and Kang references presented in the rejection of claim 3 apply to claim 5 and are incorporated herein by reference. Therefore, the method recited in claim 5 is met by Kwon and Kang.
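For illustration only: a minimal OpenCV/pytesseract sketch of detecting dialogue areas (speech bubbles) in a cut image, extracting the text of each area with OCR, and recording from which cut and from which dialogue area each piece of text was extracted. The bright-closed-region heuristic is an editorial assumption, not the detection method actually disclosed by Kwon or Kang.

    import cv2
    import pytesseract


    def extract_dialogue(cut_image, cut_index: int) -> list[dict]:
        gray = cv2.cvtColor(cut_image, cv2.COLOR_BGR2GRAY)
        # Speech bubbles are typically bright, closed regions: threshold, then contour.
        _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # Read top-to-bottom so the order information follows a preset direction.
        contours = sorted(contours, key=lambda c: cv2.boundingRect(c)[1])
        results = []
        for area_index, contour in enumerate(contours, start=1):
            x, y, w, h = cv2.boundingRect(contour)
            if w * h < 2000:   # skip regions too small to hold dialogue
                continue
            crop = cut_image[y:y + h, x:x + w]
            text = pytesseract.image_to_string(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)).strip()
            if text:
                results.append({"cut": cut_index, "dialogue_area": area_index, "text": text})
        return results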
Regarding claim 13, the combination of Kwon and Kang teaches the limitations as explained above in claim 2.
The combination of Kwon and Kang further teaches, The method of claim 2 (see claim 2 above), wherein the providing of the text information comprises providing audio information corresponding to the text information to a consumer terminal in response to a request from the consumer terminal that consumes the content (Kwon, Para. 0034: the manager terminal requests the service server to generate a webtoon video for a specific webtoon and modifications by the administrator may be completed; Kwon, Para. 0046: “the manager checks the webtoon image in which the speaker and listener are displayed for each speech balloon through the manager terminal 200, and corrects the wrong part of the speaker or listener for a specific speech balloon… the administrator can directly inspect the webtoon video in which the voice of each character is properly reflected for each dialogue”; Kwon, Paras. 0051-0052: voice data is generated in which dialogue information of each speech balloon is converted into a voice corresponding to the speaker’s character. The voice data expresses the speaker’s emotion).
Regarding claim 14, the combination of Kwon and Kang teaches the limitations as explained above in claim 13.
The combination of Kwon and Kang further teaches, The method of claim 13 (see claim 13 above), wherein the providing of the text information comprises (Kwon, Para. 0008: “the method for generating a webtoon video by converting a dialogue line”; Kwon, Para. 0021: each line is converted into a voice corresponding to the speaker of each line; Kwon, Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue; Kwon, Para. 0040: the order of dialogue is determined; Kwon, Paras. 0051-0052: voice data is generated in which dialogue information of each speech balloon is converted into a voice corresponding to the speaker’s character. The voice data expresses the speaker’s emotion):
calling the text information associated with the content in response to a request from the consumer terminal for viewing the content (Kwon, Para. 0008; Kwon, Para. 0034: the manager terminal requests the service server to generate a webtoon video for a specific webtoon and modifications by the administrator may be completed; Kwon, Para. 0037: the service server receives a request to generate a webtoon video from the manager terminal, the service server provides each scene, and one or more characters are extracted; Kwon, Para. 0046: “the manager checks the webtoon image in which the speaker and listener are displayed for each speech balloon through the manager terminal 200, and corrects the wrong part of the speaker or listener for a specific speech balloon… the administrator can directly inspect the webtoon video in which the voice of each character is properly reflected for each dialogue”; Kwon, Paras. 0051-0052: voice data is generated in which dialogue information of each speech balloon is converted into a voice corresponding to the speaker’s character. The voice data expresses the speaker’s emotion);
recognizing a cut that is being viewed by the consumer terminal among the plurality of cuts (Kang, Para. 0015: “display device for displaying the comic book image”; Kang, Para. 0025: display device such as a computer and smartphone; Kang, Paras. 0017-0018: there are cut images of the comic book images, coordinates of the cut image are recognized by the outermost coordinate recognition unit, and a unique sequence number for each of the cut images is designated; Kang, Para. 0034: the cut image is an image in which the cut drawn in the picture column is converted into an image file, and the cut image is stored and has a unique file name for each cut of the comic book; Kang, Para. 0048: the comic book image is separated into a cut image in which a sequence number is designated for each cut image and there is a speech bubble layer and dialogue layer in the cut image);
and outputting audio information corresponding to a part corresponding to the recognized cut in the text information using the consumer terminal (Kwon, Para. 0046: “the manager checks the webtoon image in which the speaker and listener are displayed for each speech balloon through the manager terminal 200, and corrects the wrong part of the speaker or listener for a specific speech balloon… the administrator can directly inspect the webtoon video in which the voice of each character is properly reflected for each dialogue”; Kwon, Paras. 0051-0052: voice data is generated in which dialogue information of each speech balloon is converted into a voice corresponding to the speaker’s character. The voice data expresses the speaker’s emotion; Kang, Para. 0015: “display device for displaying the comic book image”; Kang, Para. 0025: display device such as a computer and smartphone; Kang, Paras. 0017-0018: there are cut images of the comic book images, coordinates of the cut image are recognized by the outermost coordinate recognition unit, and a unique sequence number for each of the cut images is designated; Kang, Para. 0034: the cut image is an image in which the cut drawn in the picture column is converted into an image file, and the cut image is stored and has a unique file name for each cut of the comic book; Kang, Para. 0048: the comic book image is separated into a cut image in which a sequence number is designated for each cut image and there is a speech bubble layer and dialogue layer in the cut image).
The proposed combination as well as the motivation for combining the Kwon and Kang references presented in the rejection of claim 13 apply to claim 14 and are incorporated herein by reference. Therefore, the method recited in claim 14 is met by Kwon and Kang.
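For illustration only: a minimal sketch of recognizing which cut is being viewed from the consumer terminal's scroll position and selecting the audio corresponding to that cut. The offset-table layout model and the audio lookup are assumptions; the cited references do not disclose this exact mechanism.

    import bisect


    def cut_in_view(scroll_y: int, cut_top_offsets: list[int]) -> int:
        """Index of the cut whose top edge is at or above the current scroll position."""
        return max(bisect.bisect_right(cut_top_offsets, scroll_y) - 1, 0)


    def audio_for_view(scroll_y: int, cut_top_offsets: list[int], audio_by_cut: dict):
        # audio_by_cut maps a cut index to pre-generated voice data for its dialogue.
        return audio_by_cut.get(cut_in_view(scroll_y, cut_top_offsets))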
Regarding claim 16, the combination of Kwon and Kang teaches the limitations as explained above in claim 5.
The combination of Kwon and Kang further teaches, The method of claim 5 (see claim 5 above), further comprising: determining an utterer of the content that utters the text extracted for each detected dialogue area (Kwon, Para. 0008: “the service server extracts text displayed in one or more speech balloons included in each scene image to generate dialogue information.” The speaker and listener are recognized for each speech balloon; Kwon, Para. 0010: the speaker and listener are identified and may include identifying a character adjacent to the tail of one or more speech balloons included in each scene image as a speaker corresponding to the speech balloon; Kwon, Para. 0013; Kwon, Para. 0040: when generating dialogue information, the order of dialogue is determined in which text in a speech bubble is extracted according to a preset direction; Kwon, Paras. 0041-0044; Kwon, Para. 0048: a speech balloon is extracted), the utterer being determined based on at least one of an utterer image represented in association with a speech bubble corresponding to the detected dialogue area in the image (Kwon, Para. 0008: The speaker and listener are recognized for each speech balloon; Kwon, Para. 0010: the speaker and listener are identified and may include identifying a character adjacent to the tail of one or more speech balloons included in each scene image as a speaker corresponding to the speech balloon; Kwon, Paras. 0041-0044) and a color or a shape of the speech bubble corresponding to the detected dialogue area (Kwon, Para. 0014: the speaker’s emotional state is determined based on the shape of the speech balloon; Para. 0052; Note: the Examiner selects the shape limitation),
wherein the text information generated based on the text extracted for each detected dialogue area further includes information on the determined utterer (Kwon, Para. 0008: dialogue information is generated using the extracted text displayed in speech balloons in each scene image. The speaker and listener are recognized for each speech balloon and voice data is generated by converting the dialogue information of each speech bubble into a voice corresponding to the speaker’s character of each speech balloon; Kwon, Para. 0014: the speaker’s emotional state is determined and voice data is generated; Kwon, Paras. 0041-0044).
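For illustration only: a minimal sketch of identifying the character adjacent to the tail of a speech balloon as its speaker, the mapping the Examiner draws from Kwon at Para. 0010. Tail and character coordinates are assumed to come from an upstream detector, and nearest-distance is a simplification of whatever adjacency test Kwon actually applies.

    import math


    def determine_utterer(tail_xy: tuple[float, float],
                          characters: dict[str, tuple[float, float]]) -> str:
        """Return the id of the character whose position is closest to the balloon's tail."""
        return min(characters, key=lambda cid: math.dist(tail_xy, characters[cid]))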
Regarding claim 19, Kwon teaches the limitations as explained above in claim 18.
Kwon further teaches, The computer system of claim 18 (see claim 18 above), wherein the image includes (Para. 0008: method for generating a webtoon video in which the service server generates character information and dialogue information. Character information is generated by the service server extracting text displayed in speech balloons in each scene image and dialogue information is generated; Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue; Paras. 0039-0040),
the extracted text is the dialogue extracted from the text included in the image (Para. 0008: method for generating a webtoon video in which the service server generates character information and dialogue information. Character information is generated by the service server extracting text displayed in speech balloons in each scene image and dialogue information is generated; Para. 0039: the service server may generate dialogue information by extracting text in a speech balloon using OCR),
and the text information includes each line of a plurality of lines included in the dialogue and order information of each line (Para. 0008: “the method for generating a webtoon video by converting a dialogue line”; Para. 0021: each line is converted into a voice corresponding to the speaker of each line; Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue; Para. 0040: the order of dialogue is determined),
and the at least one processor is configured to (Para. 0056: the method of generating a webtoon video is implemented as a program executed on a computer and stored in a medium; Para. 0057: the program is read by a processor (CPU) of the computer),
(Para. 0008: method for generating a webtoon video in which the service server generates character information and dialogue information. Character information is generated by the service server extracting text displayed in speech ballons in each scene image and dialogue information is generated; Para. 0039: the service server may generate dialogue information by extracting text in a speech balloon using OCR),
and detect a dialogue area including the dialogue for the each (Para. 0008: “the service server extracts text displayed in one or more speech balloons included in each scene image to generate dialogue information”; Para. 0013; Para. 0040: when generating dialogue information, the order of dialogue is determined in which text in a speech bubble is extracted according to a preset direction; Para. 0048: a speech balloon is extracted), extract text included in the dialogue area for each detected dialogue area using optical character recognition (OCR) (Para. 0039: the service server extracts text displayed in one or more speech balloons included in each scene image to generate dialogue information and dialogue information is generated by extracting text in a speech bubble using OCR), and generate the text information based on the text extracted for each detected dialogue area (Para. 0008: dialogue information is generated using the extracted text displayed in speech balloons in each scene image; Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue),
wherein the dialogue area is an area including a speech bubble included in the image, an area including monologue or narration by an utterer or a character of the content, or an area including explanatory text of the content (Fig. 3: the speech balloons contain text in the image; Note: the Examiner selects the speech bubble limitation),
and the text information includes, as the order information, information regarding (Para. 0008: “the service server extracts text displayed in one or more speech balloons included in each scene image to generate dialogue information”; Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue).
Kwon does not expressly disclose the following limitations: wherein the image includes a plurality of cuts of the content in order, detect the plurality of cuts in the image, generate each cut image including each cut of the plurality of cuts, cut images corresponding to the plurality of cuts, dialogue area including the dialogue for the each cut image, information regarding from which dialogue area and from which cut.
However, Kang teaches, wherein the image includes a plurality of cuts of the content in order (Paras. 0017-0018: the cut images of the comic book images are arranged in a designated number order (i.e., have a designated sequence number); Para. 0032: the cut image is sequentially arranged; Para. 0043),
detect the plurality of cuts in the image (Paras. 0017-0018: there are cut images of the comic book images, coordinates of the cut image are recognized by the outermost coordinate recognition unit, and a unique sequence number for each of the cut images is designated),
generate each cut image including each cut of the plurality of cuts (Para. 0034: the cut image is an image in which the cut drawn in the picture column is converted into an image file, and the cut image is stored and has a unique file name for each cut of the comic book; Para. 0048: the comic book image is separated into a cut image in which a sequence number is designated for each cut image),
cut images corresponding to the plurality of cuts (Para. 0034: the cut image is an image in which the cut drawn in the picture column is converted into an image file, and the cut image is stored and has a unique file name for each cut of the comic book; Para. 0048: the comic book image is separated into a cut image in which a sequence number is designated for each cut image),
dialogue area including the dialogue for the each cut image (Para. 0017: “a speech bubble layer overlapping unit for overlapping the speech bubble layer by designating coordinate values to the cut images so that the speech bubble layer is overlapped for each of the cut images; and a dialogue layer overlapping unit overlapping the dialogue layer on the speech bubble layer so as to be inserted into the speech bubble layer”; Fig. 11: speech balloon layer 1d includes dialogue layer 1e),
information regarding from which dialogue area and from which cut (Para. 0036: dialogue layers 1e correspond to character data and are formed to include a unique label number; Para. 0055: cut images 1b are arranged in the designated order 1g; Para. 0056: there is overlapping of speech bubble layer 1d for each cut image 1b; Para. 0057: dialogue layer 1e is inserted into speech bubble layer 1d; Fig. 11: each cut image 1b contains an order 1g (i.e., sequential number). Order numbers 4 and 5 contain a speech bubble layer 1d and dialogue layer 1e).
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine the image including cuts of the content in order, detecting the plurality of cuts in the image, generating each cut image including each cut of the plurality of cuts and cut images corresponding to the plurality of cuts, a dialogue area including the dialogue for the each cut image, and information regarding from which dialogue area and from which cut as taught by Kang with the method of Kwon in order to easily convert a comic book image into a webtoon scroll image (Kang, Abstract and Para. 0012). Therefore, one of ordinary skill in the art would have been able to combine these claimed elements by known methods, and, in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 19.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kwon (KR 20200113675 A, see provided machine translation) in view of Kang (KR 102427651 B1, see provided machine translation), and further in view of Ebata (US 2013/0283157 A1).
Regarding claim 6, the combination of Kwon and Kang teaches the limitations as explained above in claim 5.
The combination of Kwon and Kang does not expressly disclose the following limitation: wherein the order information further includes row information in a corresponding dialogue area of the text extracted for each detected dialogue area.
However, Ebata teaches, wherein the order information further includes row information in a corresponding dialogue area of the text extracted for each detected dialogue area (Para. 0061: text information includes the number of lines; Para. 0117: the content display control section reads out information of a speech balloon corresponding to an image region specified to be displayed, as well as a text attribute (the number of lines) included in text information; Para. 0138: “if the reading order of speech balloons is included in the information regarding a speech balloon, dialogues in the speech balloons are read aloud according to the order”; Fig. 9: text information (i.e., text in rows/lines) in the speech balloon is given a dialogue order).
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine the order information including row information in a dialogue area as taught by Ebata with the combined method of Kwon and Kang in order to read aloud speech balloons according to the order (Ebata, Para. 0138). Therefore, one of ordinary skill in the art would have been able to combine these claimed elements by known methods, and, in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 6.
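For illustration only: a minimal sketch of attaching row information to the text extracted from a dialogue area, in the spirit of Ebata's number-of-lines text attribute (Paras. 0061 and 0117). Splitting the OCR output on newlines is an assumption about how that output is structured.

    def add_row_information(dialogue_text: str) -> list[dict]:
        """Attach a row number to each non-empty line within a dialogue area."""
        rows = [row for row in dialogue_text.splitlines() if row.strip()]
        return [{"row": i, "text": row} for i, row in enumerate(rows, start=1)]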
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kwon (KR 20200113675 A, see provided machine translation) in view of Kang (KR 102427651 B1, see provided machine translation), and further in view of Chon (KR 20220065540 A, see provided machine translation).
Regarding claim 9, the combination of Kwon and Kang teaches the limitations as explained above in claim 5.
The combination of Kwon and Kang does not expressly disclose the following limitations: wherein the extracting of the text from the cut images further comprises: generating a single integrated dialogue area image by integrating dialogue areas detected in the cut images; and extracting text included in a corresponding dialogue area using OCR for each dialogue area, for the dialogue areas included in the integrated dialogue area image.
However, Chon teaches, wherein the extracting of the text from the cut images further comprises (Abstract: text images are extracted from speech bubble images of comics or webtoons and the images are converted into text. A text extraction method includes identifying speech bubbles in the image, and identifying text in the speech bubble; Para. 0011: speech bubbles are recognized in each cartoon cut; Para. 0044: webtoon cuts and speech bubbles within the cut; Para. 0041; Para. 0064; Para. 0099; Paras. 0104-0109):
generating a single integrated dialogue area image by integrating dialogue areas detected in the cut images (Para. 0090: the text data of the same paragraphs are combined; see the top right of the Figure below, in which text in the speech bubbles is combined:
[media_image1.png: greyscale figure reproduced from Chon showing speech-bubble text combined into a single area]
Note: The Examiner interprets the combining of the text in the speech bubble as an integrated dialogue area);
and extracting text included in a corresponding dialogue area using OCR for each dialogue area, for the dialogue areas included in the integrated dialogue area image (Abstract: text images are extracted from speech bubble images of comics or webtoons and the images are converted into text. A text extraction method includes identifying speech bubbles in the image, and identifying text in the speech bubble; Paras. 0016 and 0018: OCR is used in which the text area is divided and recognized; Para. 0090: the text data of the same paragraphs are combined; see the top right of the Figure above, in which text in the speech bubbles is combined).
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine generating a single integrated dialogue area image by integrating dialogue areas detected in the cut images and extracting text included in a corresponding dialogue area using OCR for each dialogue area, for the dialogue areas included in the integrated dialogue area image as taught by Chon with the combined method of Kwon and Kang in order to develop a translation technology optimized for comics and webtoons (Chon, Para. 0004) and increase accuracy (Chon, Para. 0066). Therefore, one of ordinary skill in the art would have been able to combine these claimed elements by known methods, and, in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 9.
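For illustration only: a minimal Pillow/pytesseract sketch of pasting the dialogue areas detected across cut images into a single integrated image and then running OCR per area within it. This tracks the Examiner's reading of Chon (Para. 0090) only loosely; the vertical layout and library choices are assumptions.

    from PIL import Image
    import pytesseract


    def integrate_dialogue_areas(areas: list[Image.Image]) -> Image.Image:
        """Combine dialogue-area crops from the cut images into one image."""
        width = max(a.width for a in areas)
        canvas = Image.new("RGB", (width, sum(a.height for a in areas)), "white")
        y = 0
        for area in areas:
            canvas.paste(area, (0, y))
            y += area.height
        return canvas


    def ocr_integrated(areas: list[Image.Image]) -> list[str]:
        """OCR each dialogue area at its known position in the integrated image."""
        integrated, texts, y = integrate_dialogue_areas(areas), [], 0
        for area in areas:
            crop = integrated.crop((0, y, area.width, y + area.height))
            texts.append(pytesseract.image_to_string(crop).strip())
            y += area.height
        return texts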
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kwon (KR 20200113675 A, see provided machine translation) in view of Kang (KR 102427651 B1, see provided machine translation), and further in view of Xu et al. (US 2011/0093258 A1, hereinafter “Xu”).
Regarding claim 10, the combination of Kwon and Kang teaches the limitations as explained above in claim 5.
The combination of Kwon and Kang further teaches, The method of claim 5, wherein the detecting of the dialogue area comprises: detecting areas including text in the each cut image (Kwon, Para. 0008: “the service server extracts text displayed in one or more speech balloons included in each scene image to generate dialogue information”; Kwon, Para. 0013; Kwon, Para. 0040: when generating dialogue information, the order of dialogue is determined in which text in a speech bubble is extracted according to a preset direction; Kwon, Para. 0048: a speech balloon is extracted; Kang, Para. 0017: “a speech bubble layer overlapping unit for overlapping the speech bubble layer by designating coordinate values to the cut images so that the speech bubble layer is overlapped for each of the cut images; and a dialogue layer overlapping unit overlapping the dialogue layer on the speech bubble layer so as to be inserted into the speech bubble layer”; Kang, Fig. 11: speech balloon layer 1d includes dialogue layer 1e; Note: speech bubbles/balloons are areas including text);
identifying, from among the areas, a non-dialogue area that is an area including text corresponding to background of the each cut (Kwon, Paras. 0053-0054: extract text within the background of a scene image (i.e., not in the speech bubble); Kang, Paras. 0017-0018: cut images of the comic book images; Kang, Para. 0032: each cut image 1b), text representing sound effect of the content (Kwon, Para. 0015: generating voice data by recognizing the extracted text as a sound effect; Kwon, Para. 0053: text is recognized as a sound effect),
and detecting areas excluding the non-dialogue area among the areas as the dialogue area including the dialogue (Kwon, Para. 0008: “the service server extracts text displayed in one or more speech balloons included in each scene image to generate dialogue information”; Kwon, Para. 0013; Kwon, Para. 0038: text displayed in one or more speech balloons is extracted; Kwon, Para. 0040: when generating dialogue information, the order of dialogue is determined in which text in a speech bubble is extracted according to a preset direction; Note: since the text in the speech bubble is extracted, text in the speech bubble can be detected as the dialogue area (i.e., not the background/non-dialogue area)).
The combination of Kwon and Kang does not expressly disclose the following limitation: and text determined to be unrelated to a story in the content.
However, Xu teaches, and text determined to be unrelated to a story in the content (Para. 0044: a sentence may be unwanted/a bad sentence if it is unrelated to the story).
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine determining text unrelated to a story as taught by Xu with the combined method of Kwon and Kang in order to remove text to clean unwanted text from documents (Xu, Para. 0003). Therefore, one of ordinary skill in the art would have been able to combine these claimed elements by known methods, and, in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 10.
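For illustration only: a minimal sketch of detecting the dialogue area by excluding non-dialogue areas (background text, sound-effect text, and text unrelated to the story, cf. Xu at Para. 0044). The kind labels are hypothetical outputs of an upstream classifier, not something the references disclose in this form.

    NON_DIALOGUE_KINDS = {"background", "sound_effect", "unrelated_to_story"}


    def detect_dialogue_areas(text_areas: list[dict]) -> list[dict]:
        """Keep only areas whose classified kind is not a non-dialogue kind."""
        return [area for area in text_areas if area.get("kind") not in NON_DIALOGUE_KINDS]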
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kwon (KR 20200113675 A, see provided machine translation) in view of Kang (KR 102427651 B1, see provided machine translation), and further in view of Tahira (JP 2009201765A, see provided machine translation).
Regarding claim 11, the combination of Kwon and Kang teaches the limitations as explained above in claim 2.
The combination of Kwon and Kang further teaches, The method of claim 2 (see claim 2 above), wherein the providing of the text information comprises providing the text information to an administrator terminal in response to a request from the administrator terminal that manages the content, and further comprises providing a function that enables inspection of the text information for the administrator terminal (Kwon, Para. 0008; Kwon, Para. 0034: the manager terminal requests the service server to generate a webtoon video for a specific webtoon and modifications by the administrator may be completed; Kwon, Para. 0037: the service server receives a request to generate a webtoon video from the manager terminal, the service server provides each scene, and one or more characters are extracted; Kwon, Para. 0040: the order of dialogue may be a direction requested and set by the manager terminal; Kwon, Para. 0046: “the manager checks the webtoon image in which the speaker and listener are displayed for each speech balloon through the manager terminal 200, and corrects the wrong part of the speaker or listener for a specific speech balloon… the administrator can directly inspect the webtoon video in which the voice of each character is properly reflected for each dialogue”; Kwon, Paras. 0051-0052: after the correction, voice data is generated in which dialogue information of each speech balloon is converted into a voice corresponding to the speaker’s character. The voice data expresses the speaker’s emotion),
and the function that enables the inspection includes at least one of a first function capable of editing the text information (Kwon, Para. 0034: the manager terminal requests the service server to generate a webtoon video for a specific webtoon and modifications by the administrator may be completed; Kwon, Para. 0037: the service server receives a request to generate a webtoon video from the manager terminal, the service server provides each scene, and one or more characters are extracted; Kwon, Para. 0040: the order of dialogue may be a direction requested and set by the manager terminal; Kwon, Para. 0046: “the manager checks the webtoon image in which the speaker and listener are displayed for each speech balloon through the manager terminal 200, and corrects the wrong part of the speaker or listener for a specific speech balloon… the administrator can directly inspect the webtoon video in which the voice of each character is properly reflected for each dialogue”; Kwon, Paras. 0051-0052: after the correction, voice data is generated in which dialogue information of each speech balloon is converted into a voice corresponding to the speaker’s character. The voice data expresses the speaker’s emotion; Note: the Examiner interprets setting/modifying/correcting as editing), a second function capable of downloading the text information (Kwon, Para. 0002: downloading a digital webtoon through a smartphone; Kwon, Para. 0033: the manager terminal is a smartphone, etc. which installs an application provided by the service server or connects to the web to receive a webtoon video; Kwon, Para. 0008: method for generating a webtoon video in which the service server generates character information and dialogue information. Character information is generated by the service server extracting text displayed in speech balloons in each scene image and dialogue information is generated. Dialogue information is generated using the extracted text displayed in speech balloons in each scene image; Kwon, Para. 0013: generating the dialogue information using the speech balloons included in a scene image may include determining the order of the dialogue).
The combination of Kwon and Kang does not expressly disclose the following limitation: and a third function for setting an update availability status of the text information.
However, Tahira teaches, and a third function for setting an update availability status of the text information (Para. 0049: control of the storing of player data from the communication content server, including player word data; Para. 0072: update the data; Para. 0088: updating data based on registration of the word data; Para. 0092: dialogue data and word data are selected; Para. 0099: updates and storing occurs every time the application is activated; Paras. 0106-0107: displaying the selected word in a balloon).
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine setting an update availability status as taught by Tahira with the combined method of Kwon and Kang in order to allow a character to perform an action that is surprising to the player (Tahira, Para. 0005). Therefore, one of ordinary skill in the art could have combined the claimed elements by known methods and, in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 11.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kwon (KR 20200113675 A, see provided machine translation) in view of Kang (KR 102427651 B1, see provided machine translation), and further in view of Tahira (JP 2009201765 A, see provided machine translation) and Lee (US 2015/0121255 A1).
Regarding claim 12, the combination of Kwon, Kang, and Tahira teaches the limitations as explained above in claim 11.
The combination of Kwon, Kang, and Tahira further teaches, The method of claim 11 (see claim 11 above), wherein the function that enables the inspection includes the first function (Kwon, Para. 0034: the manager terminal requests the service server to generate a webtoon video for a specific webtoon and modifications by the administrator may be completed; Kwon, Para. 0037: the service server receives a request to generate a webtoon video from the manager terminal, the service server provides each scene, and one or more characters are extracted; Kwon, Para. 0040: the order of dialogue may be a direction requested and set by the manager terminal; Kwon, Para. 0046: “the manager checks the webtoon image in which the speaker and listener are displayed for each speech balloon through the manager terminal 200, and corrects the wrong part of the speaker or listener for a specific speech balloon… the administrator can directly inspect the webtoon video in which the voice of each character is properly reflected for each dialogue”; Kwon, Paras. 0051-0052: after the correction, voice data is generated in which dialogue information of each speech balloon is converted into a voice corresponding to the speaker’s character. The voice data expresses the speaker’s emotion),
and the providing of the function that enables the inspection comprises: displaying the text information that includes a first cut selected by the administrator from among the plurality of cuts and dialogue extracted from the selected first cut on the administrator terminal (Kwon, Para. 0008: text displayed in one or more speech balloons is extracted; Kwon, Para. 0032; Kwon, Para. 0034: the manager terminal requests the service server to generate a webtoon video for a specific webtoon and modifications by the administrator may be completed; Kwon, Para. 0037: the service server receives a request to generate a webtoon video from the manager terminal, the service server provides each scene, and one or more characters are extracted; Kwon, Para. 0039; Kwon, Para. 0040: the order of dialogue may be a direction requested and set by the manager terminal; Kwon, Para. 0046: “the manager checks the webtoon image in which the speaker and listener are displayed for each speech balloon through the manager terminal 200, and corrects the wrong part of the speaker or listener for a specific speech balloon… the administrator can directly inspect the webtoon video in which the voice of each character is properly reflected for each dialogue”; Kwon, Paras. 0051-0052: after the correction, voice data is generated in which dialogue information of each speech balloon is converted into a voice corresponding to the speaker’s character. The voice data expresses the speaker’s emotion; Kang, Paras. 0017-0018: cut images are arranged in a designated number order when receiving a sequence number designation command; Kang, Para. 0025: the terminal is an electronic device and is defined as a communicable device including a display device such as a smartphone; Kang, Para. 0034: each cut of the comic book; Kang, Para. 0036: dialogue layers 1e correspond to character data and are formed to include a unique label number; Kang, Para. 0048: the comic book image is separated into a cut image in which a sequence number is designated for each cut image; Kang, Fig. 11: speech balloon layer 1d includes dialogue layer 1e);
providing a first user interface for editing the displayed text information (Kwon, Para. 0008: text displayed in one or more speech balloons is extracted; Kwon, Para. 0032; Kwon, Para. 0034: the manager terminal requests the service server to generate a webtoon video for a specific webtoon and modifications by the administrator may be completed; Kwon, Para. 0037: the service server receives a request to generate a webtoon video from the manager terminal, the service server provides each scene, and one or more characters are extracted; Kwon, Para. 0040: the order of dialogue may be a direction requested and set by the manager terminal; Kwon, Para. 0046: “the manager checks the webtoon image in which the speaker and listener are displayed for each speech balloon through the manager terminal 200, and corrects the wrong part of the speaker or listener for a specific speech balloon… the administrator can directly inspect the webtoon video in which the voice of each character is properly reflected for each dialogue”; Kwon, Paras. 0051-0052: after the correction, voice data is generated in which dialogue information of each speech balloon is converted into a voice corresponding to the speaker’s character. The voice data expresses the speaker’s emotion; Kang, Paras. 0017-0018: cut images are arranged in a designated number order when receiving a sequence number designation command; Kang, Para. 0025: the terminal is an electronic device and is defined as a communicable device including a display device such as a smartphone);
The combination of Kwon, Kang, and Tahira does not expressly disclose the following limitation: and providing a second user interface for transition from the first cut to a second cut that is another cut among the plurality of cuts.
However, Lee teaches, and providing a second user interface for transition from the first cut to a second cut that is another cut among the plurality of cuts (Paras. 0127-0129: there are a plurality of cartoon cuts and the user may switch a current cartoon frame (i.e., cut) to the next cartoon frame on the screen; Figs. 17 and 18).
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine transitioning from a first cut to a second cut as taught by Lee with the combined method of Kwon, Kang, and Tahira in order to improve the enjoyability of a chat displayed as a cartoon image (Lee, Para. 0013). Therefore, one of ordinary skill in the art could have combined the claimed elements by known methods and, in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 12.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Kwon (KR 20200113675 A, see provided machine translation) in view of Tahira (JP 2009201765 A, see provided machine translation).
Regarding claim 15, Kwon teaches the limitations as explained above in claim 1.
Kwon does not expressly disclose the following limitations: further comprising: monitoring an update status and a deletion status of the content for the content server; extracting text from the image included in the updated content when an update of the content is identified; and deleting the text information associated with the content when a deletion of the content is identified.
However, Tahira teaches, further comprising: monitoring an update status and a deletion status of the content for the content server (Para. 0049: control of the storing of player data from the communication content server, including player word data; Para. 0072: update or delete the data; Para. 0073: deleting existing dialogue data and storing new dialogue data; Para. 0088: updating data based on registration of the word data);
extracting text from the image included in the updated content when an update of the content is identified (Para. 0088: updating data based on registration of the word data; Para. 0092: dialogue data and word data are selected; Paras. 0106-0107: displaying the selected word in a balloon; Note: the Examiner interprets selection as a type of extraction);
and deleting the text information associated with the content when a deletion of the content is identified (Para. 0015: word data used for the character’s dialogue in the game; Para. 0072: delete data; Para. 0073: control for deleting existing dialogue data (i.e., word data)).
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine monitoring an update and deletion status of the content for the content server, extracting text from the image included in the updated content, and deleting the text information associated with the content as taught by Tahira with the method of Kwon in order to allow a character to perform an action that is surprising to the player (Tahira, Para. 0005). Therefore, one of ordinary skill in the art could have combined the claimed elements by known methods and, in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 15.
Allowable Subject Matter
Claims 7-8 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Tobita (US 2012/0014619 A1)
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniella M. DiGuglielmo whose telephone number is (571)272-0183. The examiner can normally be reached Monday - Friday 8:00 AM - 4:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Daniella M. DiGuglielmo/Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666