DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites “implementing a machine learning model…”; however, this limitation is never used. Nowhere in claim 1 does it say how the machine learning model is being used, nor does claim 1 specify the input or output of the ML model. The limitation only appears to disclose the data that the model is trained on (“trained in real-time, on one or more summaries… one or more content items… or user interaction data.”). Claim 1 does not appear to require what the model is used for. Claims 10 and 19 recite similar limitations.
Claim 1 recites “implementing a machine learning model comprising training data pre-trained, or trained in real-time, on one or more summaries… one or more content items… or user interaction data.” It is unclear whether “on one or more summaries… one or more content items… or user interaction historical data” is referring to “implementing a machine learning model,” or “training data pre-trained, or trained in real-time.” In other words, are the summaries, content items, and user interaction historical data input into the ML model, or are they the training data? Examiner interprets that the “on one or more summaries… one or more content items… or user interaction historical data” are training data. Claims 10 and 19 recite similar limitations.
Claim 1 recites “machine learning model comprising training data pre-trained, or trained in real-time.” It is unclear whether it is the training data or the machine learning model that is “pre-trained” or “trained in real-time.” Clarification is required. Additionally, if it is the training data that is pre-trained, how is training data pre-trained? Training data is a component of training; it is unclear how training data can itself be “pre-trained.” Claims 2, 10, 11, and 19 recite similar limitations.
Claim 1 recites “a same or similar type.” “Similar” is a relative term; what is similar to one person may not be similar to another. The instant specification does not define how “similar” the types need to be. Clarification is required. Claims 9, 10, 18, and 19 recite similar limitations.
Claim 1 recites “automatically determining at least one suggested summary, of the at least one resource, tailored to the user in response to determining one or more interests or focuses of the user based in part on analyzing the user interaction historical data.” This limitation requires “the user interaction historical data.” However, the machine learning model is trained on summaries, OR content items, OR user interaction historical data. If the machine learning model is trained on summaries or content items, but not on user interaction historical data, then there is no user interaction historical data upon which the determination can be based, and this limitation cannot occur, because “… in response to determining… based in part on analyzing the user interaction historical data” does not occur. Clarification is required. Claims 10 and 19 recite similar limitations.
Claim 2 recites “the one or more users of the group.” Claim 1 introduces “one or more other users of a group.” It is unclear whether this “one or more users of the group” is the same group, or a different group, from the group introduced in claim 1. Claims 5, 11, 14 recite similar issues.
Claim 3 recites “analyzing the user interaction historical data associated with the second user.” Here, “analyzing the user interaction historical data” refers back to the analyzing in claim 1, which concerns a (first) user. “A second user” is not introduced until claim 3. Analyzing “the user interaction historical data associated with the second user” cannot occur, because the analyzing in claim 1 includes only user interaction historical data of the (first) user. In other words, there is no user interaction historical data associated with the second user; such data is introduced neither in claim 1 nor in claim 3. Claim 12 recites similar issues.
Claims 2-9 are dependent claims, and inherit the 35 U.S.C. §112(b) rejections from independent claim 1.
Claims 11-18 are dependent claims, and inherit the 35 U.S.C. §112(b) rejections from independent claim 10.
Claim 20 is a dependent claim, and inherits the 35 U.S.C. §112(b) rejections from independent claim 19.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 10, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hwang et al., Patent Application Publication number US 20190042551 A1 (hereinafter “Hwang”), in view of Singhai et al., Patent Application Publication number US 20200053403 A1 (hereinafter “Singhai”).
Claim 1: Hwang teaches “A method comprising:
analyzing at least one resource, associated with a user, being input or captured by a user interface (i.e. electronic apparatus 100 may receive a command for summary of a document in step S510. To be specific, the electronic apparatus 100 may receive a user command to select a summary icon displayed on an area of a document [Hwang 0126, Fig. 5]),… ;
implementing a machine learning model comprising training data pre-trained, or trained in real-time (i.e. learning data may be data collected or tested by the learning unit 1310 or the manufacturer of the learning unit 1310 [Hwang 0200]), on
one or more summaries of resources as a same or similar type as the at least one resource,
one or more content items associated with content of the at least one resource (i.e. As illustrated in (c) of FIG. 7, when a plurality of documents are selected as documents to be summarized, the document summary apparatus 200 may generate a summary text based on words or sentences commonly present in the plurality of selected documents [Hwang 0148, Fig. 7c]… the model learning unit 1310-4 can train the summary unit to generate summary information using a plurality of documents. Specifically, the model learning unit 1310-4 can train the summary unit to generate the summary information based on words common to the words included in the plurality of documents [Hwang 0201]), or
user interaction historical data (i.e. obtain user history information related to the document in step S520. At this time, the user history information may include user profile information registered by the user, user use history information, document access path information [Hwang 0127, Fig. 5]… document summary apparatus 200 may summarize the document based on the user history information in step S540 [Hwang 0129, Fig. 5] note: step S540 teaches that the Document Summary Apparatus 200 generates a summary based on user history information. Thus, the document summary apparatus must be trained on user history data);
automatically determining at least one suggested summary, of the at least one resource, tailored to the user in response to determining one or more interests or focuses of the user based in part on analyzing the user interaction historical data (i.e. the document summary apparatus 200 may generate summarized information by summarizing the document based on the knowledge level of a user. Specifically, when the search history or the document check history related to the document to be summarized is large based on the use history information of the user, the document summary apparatus 200 may briefly summarize the basic contents of the document and generate summary information. As another example, the document summary apparatus 200 can generate summary information by determining the degree of interest in a document based on user profile information (for example, age, gender, etc.). Specifically, if the degree of interest in the document is high based on the user profile information, the document summary apparatus 200 can generate summary information to shorten the basic contents and summarize the detailed contents in a long time, and the document summary apparatus 200 can generate the summary information to summarize the basic contents to be long. As another example, the document summary apparatus 200 may generate the summary information by determining the user's current interest level based on the access path of the document. Specifically, when the document is accessed by chance during web browsing, the document summary apparatus 200 may generate summary information such that the degree of interest in the document is low and the basic content is summarized to be long. If the document is accessed during the verification of the related documents, the document summary apparatus 200 may determine that the interest of the document is high, so that the summary information can be shortly summarized and the new contents summarized to be long [Hwang 0129]); and
presenting, by a user interface or a display device, the at least one suggested summary of the at least one resource (i.e. The electronic apparatus may provide the transmitted summary information in step S560 [Hwang 0132, Fig. 5]… the electronic apparatus 100 may insert the summary information 50 at the point at which the user drag input of the first document 10 is indicated, as shown in (d) of FIG. 1 [Hwang 0052, Fig. 1]).”
Hwang is silent regarding “wherein the at least one resource is sharable among one or more other users of a group.”
Singhai teaches “analyzing at least one resource, associated with a user, being input or captured by a user interface, wherein the at least one resource is sharable among one or more other users of a group (i.e. the learning-based engagement system is capable of modifying a multi-step engagement strategy as it is deployed using machine learning… the learning-based engagement system is capable of optimizing performance of the exit condition among targeted client device users [Singhai 0021] note: a multi-step engagement strategy is input, in order to be modified by the machine learning system. Note2: multi-step engagement strategies are used, or shared among many users);
implementing a machine learning model comprising training data pre-trained, or trained in real-time, on
one or more summaries of resources as a same or similar type as the at least one resource,
one or more content items associated with content of the at least one resource, or
user interaction historical data (i.e. machine learning models that are trained using data describing historical user interactions [Singhai 0020]);”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Hwang to include the feature of having the ability to have a resource be shared as disclosed by Singhai.
One would have been motivated to do so, before the effective filing date of the invention, because it provides the benefit that many other users can use the same resource, increasing collaboration and decreasing the resources needed to generate and store a separate resource for each user.
Claim 10: Hwang and Singhai teach an apparatus comprising: one or more processors; and at least one memory storing instructions, that when executed by the one or more processors (i.e. memory 120 is accessed by the processor 140 and read/write/modify/delete/update of data by the processor 140 can be performed [Hwang 0064]), cause the apparatus to perform operations corresponding to the method of claim 1; therefore, it is rejected under the same rationale.
Claim 19: Hwang and Singhai teach a non-transitory computer-readable medium storing instructions that, when executed (i.e. memory 120 is accessed by the processor 140 and read/write/modify/delete/update of data by the processor 140 can be performed [Hwang 0064]), cause performance of operations corresponding to the method of claim 1; therefore, it is rejected under the same rationale.
Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Hwang, in view of Singhai, and further in view of Sharifi et al., Patent Application Publication number US 20180239495 A1 (hereinafter “Sharifi”).
Claim 2: Hwang and Singhai teach all the limitations of claim 1, above. Hwang and Singhai are silent regarding “further comprising: training the training data pre-trained, or in real-time, further on one or more determined topics or subjects associated with one or more communications of the one or more users of the group; and
the automatically determining the at least one suggested summary of the at least one resource tailored to the user further in response to determining the one or more determined topics or subjects.”
Sharifi teaches “further comprising:
training the training data pre-trained, or in real-time, further on one or more determined topics or subjects associated with one or more communications of the one or more users of the group (i.e. a machine learning classifier trained to identify topics from labeled training data. In various implementations, the machine learning model can be trained based on labeled training data comprising a plurality of messages that are labeled with topics [Sharifi 0050]); and
the automatically determining the at least one suggested summary of the at least one resource tailored to the user further in response to determining the one or more determined topics or subjects (i.e. the present disclosure is directed to automated techniques for… displaying the generated groups of messages with labels that summarize the messages, e.g., by identifying the topics of the messages in the groups [Sharifi 0046]).”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Hwang and Singhai to include the feature of having the ability to summarize communications as disclosed by Sharifi.
One would have been motivated to do so, before the effective filing date of the invention, because it provides the benefit of summarizing resources tailored to the user, which offers greater personalization, which increases user satisfaction.
Claim 11: Claim 11 is similar in content and in scope to claim 2, thus it is rejected under the same rationale.
Claims 3-6, 12-15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hwang, in view of Singhai, and further in view of Tamayo, Patent Application Publication number US 20250208755 A1 (hereinafter “Tamayo”).
Claim 3: Hwang and Singhai teach all the limitations of claim 1, above. Hwang and Singhai are silent regarding “further comprising:
automatically determining at least one other suggested summary, of the at least one resource, tailored to a second user among the one or more other users of the group in response to determining one or more other interests or other focuses of the second user based in part on analyzing the user interaction historical data associated with the second user.”
Tamayo teaches “further comprising:
automatically determining at least one other suggested summary, of the at least one resource, tailored to a second user among the one or more other users of the group in response to determining one or more other interests or other focuses of the second user based in part on analyzing the user interaction historical data associated with the second user (i.e. a microphone and a camera may be used to generate activity data indicating the activity of the plurality of participants 104. In some embodiments, the activity data may be used to determine a level of engagement of a specific participant of the plurality of participants 104. When the specific participant's level of engagement drops below a threshold level, the continuity break module 116 may generate a participant-specific summary of the presentation content during the period of time in which the participant does not maintain a threshold level of engagement [Tamayo 0077]).”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Hwang and Singhai to include the feature of having the ability to customize summaries to more people as disclosed by Tamayo.
One would have been motivated to do so, before the effective filing date of the invention, because it provides the benefit of summarizing resources tailored to more users, which offers greater personalization, which increases user satisfaction.
Claim 4: Hwang and Singhai and Tamayo teach all the limitations of claim 3, above. Tamayo teaches “wherein:
content items of the at least one suggested summary of the at least one resource and items of content of the at least one other suggested summary of the at least one resource are different (i.e. a microphone and a camera may be used to generate activity data indicating the activity of the plurality of participants 104. In some embodiments, the activity data may be used to determine a level of engagement of a specific participant of the plurality of participants 104. When the specific participant's level of engagement drops below a threshold level, the continuity break module 116 may generate a participant-specific summary of the presentation content during the period of time in which the participant does not maintain a threshold level of engagement [Tamayo 0077] note: different participants with different levels of engagement would receive different summaries).”
One would have been motivated to combine Hwang and Singhai and Tamayo, before the effective filing date of the invention, because it provides the benefit of summarizing resources tailored to more users, which offers greater personalization, which increases user satisfaction.
Claim 5: Hwang and Singhai teach all the limitations of claim 1, above. Hwang and Singhai are silent regarding “further comprising:
performing the automatically determining the at least one suggested summary of the at least one resource in an instance in which the user interface captures input of the at least one resource to be shared with the one or more users of the group.”
Tamayo teaches “further comprising:
performing the automatically determining the at least one suggested summary of the at least one resource in an instance in which the user interface captures input of the at least one resource to be shared with the one or more users of the group (i.e. a microphone and a camera may be used to generate activity data indicating the activity of the plurality of participants 104. In some embodiments, the activity data may be used to determine a level of engagement of a specific participant of the plurality of participants 104. When the specific participant's level of engagement drops below a threshold level, the continuity break module 116 may generate a participant-specific summary of the presentation content during the period of time in which the participant does not maintain a threshold level of engagement [Tamayo 0077]… user-specific input data from each of the plurality of participants… may be… shared through the event engagement application 118. In some embodiments, the user-specific input data may be… participant “likes,”… participant poll or query responses [Tamayo 0065] note: the camera captures input. Participant is in a recording of an event, and the participant’s inputs are shared to other users).”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Hwang and Singhai to include the feature of having the ability to capture input as disclosed by Tamayo.
One would have been motivated to do so, before the effective filing date of the invention, because it provides the benefit of accepting input “live,” or as it is occurring, which increases the types of input that can be summarized, which increases the feature set of a summary generator.
Claim 6: Hwang and Singhai and Tamayo teach all the limitations of claim 5, above. Tamayo teaches “wherein:
the user interface captures the input comprises one or more of an upload of the at least one resource to the user interface by the user (i.e. each slide from, for example, PowerPoint or Keynote or other similar program, can be shared in real time from the at least one presenter computing device 108 to each of the at least one participant device 102 at a presentation location… the plurality of participants 104 to see the slides on the plurality of participant computing devices 102 in the same order as the presenter, for example, projects them (synchronous mode) [Tamayo 0053] note: slides are uploaded to other participants) or an option to post or publish the at least one resource by the user interface in response to the user interface detecting the at least one resource.”
One would have been motivated to combine Hwang and Singhai and Tamayo, before the effective filing date of the invention, because it provides the benefit of sharing the input with other users, which increases collaboration and other social features.
Claim 12: Claim 12 is similar in content and in scope to claim 3, thus it is rejected under the same rationale.
Claim 13: Claim 13 is similar in content and in scope to claim 4, thus it is rejected under the same rationale.
Claim 14: Claim 14 is similar in content and in scope to claim 5, thus it is rejected under the same rationale.
Claim 15: Claim 15 is similar in content and in scope to claim 6, thus it is rejected under the same rationale.
Claim 20: Claim 20 is similar in content and in scope to claim 3, thus it is rejected under the same rationale.
Claims 7-8 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Hwang, in view of Singhai, and further in view of Modani et al., Patent Application Publication number US 20180011931 A1 (hereinafter “Modani”).
Claim 7: Hwang and Singhai teach all the limitations of claim 1, above. Hwang and Singhai are silent regarding “further comprising:
generating an alternative summary of the at least one resource tailored to the user based on determining at least one of a writing behavior of the user, a typing behavior of the user, a style of the user or a tone of the user.”
Modani teaches “further comprising:
generating an alternative summary of the at least one resource tailored to the user based on determining at least one of a writing behavior of the user, a typing behavior of the user, a style of the user or a tone of the user (i.e. feedback analyzer 122 learns criteria corresponding to which topics a user prefers in summaries using a machine learning model and boost text units corresponding to the preferred topics. As another example, the criteria could be a preferred writing style of the user [Modani 0080]).”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Hwang and Singhai to include the feature of having the ability to copy the user’s writing style as disclosed by Modani.
One would have been motivated to do so, before the effective filing date of the invention, because it provides the benefit of further personalizing summaries for a user, which increases familiarity, which increases the likelihood the user will read the generated content.
Claim 8: Hwang and Singhai and Modani teach all the limitations of claim 7, above. Modani teaches “wherein:
at least one format or one or more content items of the at least one suggested summary of the at least one resource is different from at least a second format or one or more data items of the alternative summary of the at least one resource (i.e. feedback analyzer 122 learns criteria corresponding to which topics a user prefers in summaries using a machine learning model and boost text units corresponding to the preferred topics. As another example, the criteria could be a preferred writing style of the user (e.g., a business style, a conversational style, a particular style of sentence structure, a use of cliché s, a use of tense, a level of sophistication, a style directed to a particular audience, or demographic, or other features of a text unit that can be evaluated by text unit analyzer 116) [Modani 0080] note: the summaries are different, so the data items and format of the summaries are also different).”
One would have been motivated to combine Hwang and Singhai and Modani, before the effective filing date of the invention because it provides the benefit of a different look and feel when presenting different summaries, which increases the user’s awareness that the summaries are different, which helps the user differentiate which summaries the user intends to read.
Claim 16: Claim 16 is similar in content and in scope to claim 7, thus it is rejected under the same rationale.
Claim 17: Claim 17 is similar in content and in scope to claim 8, thus it is rejected under the same rationale.
Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Hwang, in view of Singhai, further in view of de Oliveira et al., Patent Application Publication number US 20220067269 A1 (hereinafter “de Oliveira”), and further in view of Hulten et al., Patent Application Publication number US 20090187988 A1 (hereinafter “Hulten”).
Claim 9: Hwang and Singhai teach all the limitations of claim 1, above. Hwang and Singhai are silent regarding “further comprising:
performing the automatically determining the at least one suggested summary of the at least one resource based on determining that the at least one resource comprises a same or similar type of resource as a corresponding resource associated with, or within, the training data,”
de Oliveira teaches “further comprising:
performing the automatically determining the at least one suggested summary of the at least one resource based on determining that the at least one resource comprises a same or similar type of resource as a corresponding resource associated with, or within, the training data (i.e. the same language model that is being used to understand the source (e.g., the training data) is also used to generate the summary. This is especially useful when there are aligned domains, that is, corpus of documents that are similar, like conversations between two people in a support-center context. This means that the language that is being used in the conversation is probably very similar to what will be outputted in the summary, with similar words and similar phrases [de Oliveira 0038]),”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Hwang and Singhai to include the feature of having the ability to generate summaries with similar data as training data as disclosed by de Oliveira.
One would have been motivated to do so, before the effective filing date of the invention, because it provides the benefit of generating more accurate summaries, which increases user satisfaction.
Hwang and Singhai and de Oliveira are silent regarding “wherein the user interaction historical data is obtained during a predetermined time period.”
Hulten teaches “wherein the user interaction historical data is obtained during a predetermined time period (i.e. a training set can be built on historical user interaction data by observing users previous transactions (say from last month) [Hulten 0024]).”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Hwang and Singhai and de Oliveira to include the feature of having the ability to obtain historical data over a time period as disclosed by Hulten.
One would have been motivated to do so, before the effective filing date of the invention, because it provides the benefit of customizing generated content to specific periods of user interaction, which allows the model(s) to be trained on, and dialed in to, specific user interactions.
Claim 18: Claim 18 is similar in content and in scope to claim 9, thus it is rejected under the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ingel (US 20250006182 A1), listed on the PTO-892, is related to generating custom summaries based on historical user interactions.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL SHEN whose telephone number is (469)295-9169 and email address is samuel.shen@uspto.gov. The examiner can normally be reached Monday-Thursday, 7:00 am - 5:00 pm CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fred Ehichioya, can be reached at (571) 272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.S./Examiner, Art Unit 2179
/IRETE F EHICHIOYA/Supervisory Patent Examiner, Art Unit 2179