Prosecution Insights
Last updated: April 19, 2026
Application No. 18/443,012

COMPUTER-BASED SYSTEMS INVOLVING MACHINE LEARNING ASSOCIATED WITH GENERATION OF PREDICTIVE CONTENT FOR DATA STRUCTURE SEGMENTS AND METHODS OF USE THEREOF

Final Rejection — §112, §DP
Filed
Feb 15, 2024
Examiner
NGUYEN, QUYNH H
Art Unit
2693
Tech Center
2600 — Communications
Assignee
Capital One Services LLC
OA Round
2 (Final)
87%
Grant Probability
Favorable
3-4
OA Rounds
2y 8m
To Grant
99%
With Interview

Examiner Intelligence

Grants 87% — above average
87%
Career Allow Rate
941 granted / 1078 resolved
+25.3% vs TC avg
Strong +17% interview lift
+17.2%
Interview Lift
resolved cases with interview
Typical timeline
2y 8m
Avg Prosecution
29 currently pending
Career history
1107
Total Applications
across all art units
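The tiles above hang together arithmetically: grants divided by resolved cases gives the career allow rate, and resolved plus pending cases equals the application total. A quick sanity check (variable names are ours; the counts 941, 1078, 29, and 1107 all come from the page):

```python
# Figures from the Examiner Intelligence tiles.
granted = 941    # career grants
resolved = 1078  # resolved cases (grants + abandonments)
pending = 29     # currently pending

# Career allow rate: grants over resolved cases.
allow_rate = granted / resolved

# Resolved + pending should equal the "Total Applications" tile.
assert resolved + pending == 1107

print(f"career allow rate: {allow_rate:.1%}")  # -> 87.3%, displayed as 87%
```

The implied abandonment count is 1078 − 941 = 137 resolved cases that did not grant.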

Statute-Specific Performance

§101
18.6%
-21.4% vs TC avg
§103
42.7%
+2.7% vs TC avg
§102
7.4%
-32.6% vs TC avg
§112
10.3%
-29.7% vs TC avg
Based on career data from 1078 resolved cases
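Each statute card pairs the examiner's rate with a delta against the Tech Center average, so the implied TC baseline is simply the card's rate minus its delta. Running that for all four cards yields the same ~40% baseline, which suggests the deltas share one reference figure. A sketch under that assumption (the dict layout and names are ours; the percentages come from the cards):

```python
# (examiner rate %, delta vs Tech Center average %) per statute card.
cards = {
    "101": (18.6, -21.4),
    "103": (42.7, +2.7),
    "102": (7.4, -32.6),
    "112": (10.3, -29.7),
}

# Implied TC baseline = rate - delta; all four cards recover ~40.0%.
for statute, (rate, delta) in cards.items():
    baseline = rate - delta
    print(f"§{statute}: examiner {rate}% vs TC ~{baseline:.1f}%")
```

This is an inference from the displayed numbers, not a stated methodology; the page itself only reports the deltas.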

Office Action

§112, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Election/Restrictions

1. Applicant’s election without traverse of claims 1-13 and 22-27 in the reply filed on 10/23/25 is acknowledged. Claim 21 is withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention. Claim 21 should be canceled.

Claim Rejections - 35 U.S.C. § 112

2. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 1 recites the limitation "the first set of subcomponents and the second set of components" in lines 18-19. There is insufficient antecedent basis for this limitation in the claim. Claim 22 recites the limitation "the first set of subcomponents and the second set of components" in lines 21-22. There is insufficient antecedent basis for this limitation in the claim.

Double Patenting

3. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

4. Claims 1-13 and 22-27 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,907,664. Although the claims at issue are not identical, they are not patentably distinct from each other because all the claimed limitations recited in the present application are broader than, and transparently found in, U.S. Patent No. 11,907,664 with obvious wording variations. When claims in the pending application are broader than the ones in the patent, the broad claims in the pending application are rejected under obviousness-type double patenting over the previously patented narrow claims. In re Van Ornum and Stang, 214 USPQ 761. Also, omission of an element and its function in a combination is an obvious expedient if the remaining elements perform the same functions as before. In re Karlson, 136 USPQ 184 (CCPA 1963).

U.S. Patent Application 18/443,012 U.S.
Patent 11,907,664 1.A computer-implemented method comprising: 1.A computer-implemented method comprising: training, by at least one processor, a natural language generation (NLG) machine learning model on customer data derived from a plurality of customers; training, by at least one processor, a natural language generation (NLG) machine learning model on customer data derived from a plurality of customers; receiving, by the at least one processor, communications associated with the plurality of customers, wherein the communications comprised a plurality of customers messages; receiving, by the at least one processor, communications associated with the plurality of customers, the communications comprised a plurality of customers messages; determining, by the at least one processor, a message type, from among a plurality of message types, for each message of the communications, wherein the plurality of message types comprises a first message type and a second message type; wherein the communications comprise the first messages of the first message type and second messages of the second message type splitting, by the at least one processor, the first messages of the first message type into a first set of subcomponent text sections; splitting, by the at least one processor, the second messages of the second message type into a second set of subcomponent text sections; predicting, based on the plurality of customer messages in combination with conversion event data pertaining to the plurality of customer messages, a set of predicted customer preferences for the plurality of customer; predicting, based on the plurality of customer messages in combination with conversion event data pertaining to the plurality of customer messages, a set of predicted customer preferences for the plurality of customer; analyzing, by the at least one processor, via the NLG machine learning model, a first set and a second set in combination with the ser of predicted customer preferences to 
generate a plurality of semantic numerical scores comprising a predicted effectiveness for each subcomponent text section, wherein each respective semantic numerical score is based on an evaluation of each respective subcomponent text section in a respective semantic category of a plurality of semantic categories and the predicted effectiveness comprises predicted conversion rates for messages that follow the set of predicted customer preferences; wherein each subcomponent identified in the first set of subcomponents and the second set of components is mapped to a predetermined scoring framework to determine the plurality of semantic numerical scores by analyzing word embeddings representing at least one vector of numbers each subcomponent text section for feeding into the machine learning model, wherein the numeric score is a percentage in decimal form; analyzing, by the at least one processor, via the NLG machine learning model, the first set and the second set in combination with the ser of predicted customer preferences to generate a plurality of semantic numerical scores comprising a predicted effectiveness for each subcomponent text section, wherein each respective semantic numerical score is based on an evaluation of each respective subcomponent text section in a respective semantic category of a plurality of semantic categories and the predicted effectiveness comprises predicted conversion rates for messages that follow the set of predicted customer preferences; wherein the plurality of the semantic categories comprises at least three semantic categories are selected from a sentiment category, an emotion category, a perceived message type category, a semantic relatedness category, a feeling category, a tone category, a perception category, a micro structure category, and an emotional intelligence category; determining, by the at least one processor, at least one impactful semantic category for a target audience by selecting at least one semantic category 
corresponding to at least one semantic numerical score of at least one subcomponent text section of the first set or the second set that is equal to or higher than a first pre-determine threshold value; determining, by the at least one processor, at least one impactful semantic category for a target audience by selecting at least one semantic category corresponding to at least one semantic numerical score of at least one subcomponent text section of the first set or the second set that is equal to or higher than a first pre-determine threshold value; generating, by the at least one processor, via the NLG machine learning model, personalized textual content targeting the audience based on at least one unit of text having a corresponding semantic numerical score in the at least one impactful semantic category that is equal to or than a second pre-determined threshold value; and generating, by the at least one processor, via the NLG machine learning model, at least one personalized communication for transmission to an audience from a personalized textual content; and generating, by the at least one processor, via the NLG machine learning model, at least one personalized communication for transmission to an audience from a personalized textual content; and updating, by the at least one processor, the NLG machine learning model based on identified interactions between the audience and the at least one personalized communication. updating, by the at least one processor, the NLG machine learning model based on identified interactions between the audience and the at least one personalized communication. 2. The method of claim 1, wherein a first message type comprises emails and a second message type comprises SMS messages, push messages, and web banners. 2. The method of claim 1, wherein a first message type comprises emails and a second message type comprises SMS messages, push messages, and web banners. 3. 
The method of claim 1, wherein a first set of subcomponent text sections comprises 3 or more parts, including 3 or more of a subject line, a preheader, a banner image, an introductory section, and a call to action. 3. The method of claim 1, wherein a first set of subcomponent text sections comprises 3 or more parts, including 3 or more of a subject line, a preheader, a banner image, an introductory section, and a call to action. 4. The method of claim 1, wherein a first message type comprises email messages, and wherein a first set of subcomponent text sections comprises 3 or more parts, including 3 or more of a subject line, a preheader, a banner image, an introductory section, and a call to action. 4. The method of claim 1, wherein a first message type comprises email messages, and wherein a first set of subcomponent text sections comprises 3 or more parts, including 3 or more of a subject line, a preheader, a banner image, an introductory section, and a call to action. 5. The method of claim 1, wherein a second set of subcomponent text sections comprises 2 or more parts, including 2 or more of an introductory section, a body section, a value proposition, and an end section. 5. The method of claim 1, wherein a second set of subcomponent text sections comprises 2 or more parts, including 2 or more of an introductory section, a body section, a value proposition, and an end section. 6. The method of claim 1, wherein a second message type comprises 2 or more of SMS messages, push messages and/or web banners, and wherein a second set of subcomponent text sections comprises 2 or more parts, including 2 or more of an introductory section, a body section, a value proposition, an end section, and/or entire message. 6. 
The method of claim 1, wherein a second message type comprises 2 or more of SMS messages, push messages and/or web banners, and wherein a second set of subcomponent text sections comprises 2 or more parts, including 2 or more of an introductory section, a body section, a value proposition, an end section, and/or entire message. 7. The method of claim 1, wherein a sentiment category is comprised of 2 or more subcategories selected from positive, neutral, and negative. 7. The method of claim 1, wherein a sentiment category is comprised of 2 or more subcategories selected from positive, neutral, and negative. 8. The method of claim 1, wherein the emotion category is comprised of 2 or more, subcategories selected from a group composed of happiness, concern, excitement, sadness, candor and boredom. 8. The method of claim 1, wherein the emotion category is comprised of 2 or more, subcategories selected from a group composed of happiness, concern, excitement, sadness, candor and boredom. 9. The method of claim 1, wherein a perceived message type category is comprised of 2 or more subcategories selected from a group composed of news, feedback, query, marketing, and spam. 9. The method of claim 1, wherein a perceived message type category is comprised of 2 or more subcategories selected from a group composed of news, feedback, query, marketing, and spam. 10. The method of claim 1, wherein at least one personalized communication comprises a first portion of the personalized textual content that corresponds to the sentiment category determined to be impactful to the audience, a second portion of the personalized textual content that corresponds to the emotion category determined to be impactful to the audience, and a third portion of the personalized textual content that corresponds to the perceived message type category determined to be impactful to the audience. 10. 
The method of claim 1, wherein at least one personalized communication comprises a first portion of the personalized textual content that corresponds to the sentiment category determined to be impactful to the audience, a second portion of the personalized textual content that corresponds to the emotion category determined to be impactful to the audience, and a third portion of the personalized textual content that corresponds to the perceived message type category determined to be impactful to the audience. 11. The method of claim 1, wherein a semantic numerical score in a semantic relatedness category is set based on a percentage of the text that is determined to be semantically similar to a benchmark communication. 11. The method of claim 1, wherein a semantic numerical score in a semantic relatedness category is set based on a percentage of the text that is determined to be semantically similar to a benchmark communication. 12. The method of claim 1 further comprising: building a database of information regarding personalized messages that are impactful to each individual customer, wherein the database is built based on at least one prior personalized message that elicited a response from the individual customer. 12. The method of claim 1 further comprising: building a database of information regarding personalized messages that are impactful to each individual customer, wherein the database is built based on at least one prior personalized message that elicited a response from the individual customer. 13. 
The method of claim 1 further comprising: analyzing different textual components or text within at least one region of the at least one personalized communication to determine how the different textual component or text affect the semantic numerical scores; and changing different textual portions or text within the at least one region of the at least one personalized communication, prior to sending to the audience, to generate a personalized communication that is determined to be potentially impactful to the audience via an increase in the semantic numerical score. 13. The method of claim 1 further comprising: analyzing different textual components or text within at least one region of the at least one personalized communication to determine how the different textual component or text affect the semantic numerical scores; and/or changing different textual portions or text within the at least one region of the at least one personalized communication, prior to sending to the audience, to generate a personalized communication that is determined to be potentially impactful to the audience via an increase in the semantic numerical score. 22. A system comprising: one or more processors; and a memory in communication with the one or more processors and storing instructions that, when executed by the one or more processors, cause the one or more processors to: 14. 
A computer-implemented method comprising: train a natural language generation (NLG) machine learning model on customer data derived from a plurality of customers; training, by at least one processor, a natural language generation (NLG) machine learning model on customer data derived from a plurality of customers; receive communications associated with the plurality of customers, wherein the communications comprised a plurality of customers messages; receiving, by the at least one processor, communications associated with the plurality of customers, the communications comprised of customers messages; categorizing the customer messages that are emails as a first message type, and categorizing the customer messages that are SMS messages, push messages, and web banners as second message type; splitting, by the at least one processor, first messages of the first message type into a first set of subcomponent text sections; splitting, by the at least one processor, second messages of the second message type into a second set of subcomponent text sections; predict, based on the plurality of customer messages in combination with conversion event data pertaining to the plurality of customer messages, a set of predicted customer preferences for the plurality of customer; predicting, based on the plurality of customer messages in combination with conversion event data pertaining to the plurality of customer messages, a set of predicted customer preferences for the plurality of customer; analyze via the NLG machine learning model, a first set and a second set in combination with the set of predicted customer preferences to generate a plurality of semantic numerical scores comprising a predicted effectiveness for each subcomponent text section, wherein each respective semantic numerical score is based on an evaluation of each respective subcomponent text section in a respective semantic category of a plurality of semantic categories and the predicted effectiveness comprises 
predicted conversion rates for messages that follow the set of predicted customer preferences; and wherein each subcomponent identified in the first set of subcomponents and the second set of components is mapped to a predetermined scoring framework to determine the plurality of semantic numerical scores by analyzing word embeddings representing at least one vector of numbers each subcomponent text section for feeding into the machine learning model, wherein the numeric score is a percentage in decimal form; analyzing, by the at least one processor, via the NLG machine learning model, the first set and the second set in combination with the set of predicted customer preferences to generate a plurality of semantic numerical scores comprising a predicted effectiveness for each subcomponent text section, wherein each respective semantic numerical score is based on an evaluation of each respective subcomponent text section in a respective semantic category of a plurality of semantic categories and the predicted effectiveness comprises predicted conversion rates for messages that follow the set of predicted customer preferences; wherein the plurality of the semantic categories comprises at least three semantic categories are selected from a sentiment category, an emotion category, a perceived message type category, a semantic relatedness category, a feeling category, a tone category, a perception category, a micro structure category, and an emotional intelligence category; determine at least one impactful semantic category for a target audience by selecting at least one semantic category corresponding to at least one semantic numerical score of at least one subcomponent text section of the first set or the second set that is equal to or higher than a first pre-determine threshold value; determining, by the at least one processor, at least one impactful semantic category for a target audience by selecting at least one semantic category corresponding to at least one 
semantic numerical score of at least one subcomponent text section of the first set or the second set that is equal to or higher than a first pre-determine threshold value; generating, by the at least one processor, via the NLG machine learning model, personalized textual content targeting the audience based on at least one unit of text having a corresponding semantic numerical score in the at least one impactful semantic category that is equal to or than a second pre-determined threshold value; generate, via the NLG machine learning model, at least one personalized communication for transmission to an audience from a personalized textual content; and generate, by the at least one processor, via the NLG machine learning model, at least one personalized communication for transmission to an audience from a personalized textual content; and update the NLG machine learning model based on identified interactions between the audience and the at least one personalized communication. update, by the at least one processor, the NLG machine learning model based on identified interactions between the audience and the at least one personalized communication. 23. The method of claim 22, wherein a first set of subcomponent text sections comprises 3 or more parts, including 3 or more of a subject line, a preheader, a banner image, an introductory section, and a call to action. 15. The method of claim 14, wherein a first set of subcomponent text sections comprises 3 or more parts, including 3 or more of a subject line, a preheader, a banner image, an introductory section, and a call to action. 24. The method of claim 22, wherein a first message type comprises email messages, and wherein a first set of subcomponent text sections comprises 3 or more parts, including 3 or more of a subject line, a preheader, a banner image, an introductory section, and a call to action. 16. 
The method of claim 14, wherein a first message type comprises email messages, and wherein a first set of subcomponent text sections comprises 3 or more parts, including 3 or more of a subject line, a preheader, a banner image, an introductory section, and a call to action. 25. The method of claim 22, wherein a second set of subcomponent text sections comprises 2 or more parts, including 2 or more of an introductory section, a body section, a value proposition, and an end section. 17. The method of claim 14, wherein a second set of subcomponent text sections comprises 2 or more parts, including 2 or more of an introductory section, a body section, a value proposition, and an end section. 26. The method of claim 22, wherein a second message type comprises 2 or more of SMS messages, push messages and/or web banners, and wherein a second set of subcomponent text sections comprises 2 or more parts, including 2 or more of an introductory section, a body section, a value proposition, an end section, and/or entire message. 18. The method of claim 14, wherein a second message type comprises 2 or more of SMS messages, push messages and/or web banners, and wherein a second set of subcomponent text sections comprises 2 or more parts, including 2 or more of an introductory section, a body section, a value proposition, an end section, and/or entire message. 27. The method of claim 22, wherein a sentiment category is comprised of 2 or more subcategories selected from positive, neutral, and negative. 19. The method of claim 14, wherein a sentiment category is comprised of 2 or more subcategories selected from positive, neutral, and negative. 20. The method of claim 14, wherein the emotion category is comprises of 3 or more subcategories selected from a group composed of happiness, concern, excitement, sadness, candor and boredom. Allowable Subject Matter 5. The following is a statement of reasons for allowance: Pantanelli et al. 
(2011/0202512) teaches a method to obtain a better understanding and/or translation of text by using semantic analysis and/or artificial intelligence and/or connotations and/or rating. Howcroft (2008/0201731) teaches a system and method for single sign-on targeted advertising. As to claims 1 and 22, the prior art of record fails to teach, or render obvious, alone or in combination, a computer-implemented method comprising the claimed components, relationships, and functionalities as specifically recited in the claims.

6. Claims 1-13 and 22-27 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 112 (pre-AIA), second paragraph, and terminal disclaimer(s) filed to overcome the double patenting rejection(s), set forth in this Office action.

Response to Arguments

7. Applicant’s arguments, filed 2/19/26, with respect to the 35 U.S.C. 101 rejection(s) have been fully considered and are persuasive. The 35 U.S.C. 101 rejection(s) has been withdrawn. Claim 21 is withdrawn; please cancel claim 21. The examiner has reached out and left messages for Mr. Jordan Lewis since 3/2/26 to file electronic terminal disclaimer(s) and correct the minor issues rejected under 35 U.S.C. 112 (pre-AIA), second paragraph, in order to advance the patent application, but received no response.

Conclusion

8. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUYNH H NGUYEN, whose telephone number is (571) 272-7489. The examiner can normally be reached Monday-Thursday, 7:30 AM-5:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QUYNH H NGUYEN/
Primary Examiner, Art Unit 2693

Prosecution Timeline

Feb 15, 2024
Application Filed
Nov 10, 2025
Non-Final Rejection — §112, §DP
Feb 19, 2026
Response Filed
Mar 20, 2026
Final Rejection — §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591740
METHODS AND SYSTEMS FOR GENERATING TEXTUAL FEATURES
2y 5m to grant Granted Mar 31, 2026
Patent 12567409
RESTRICTING THIRD PARTY APPLICATION ACCESS TO AUDIO DATA CONTENT
2y 5m to grant Granted Mar 03, 2026
Patent 12566920
System and Method to Generate and Enhance Dynamic Interactive Applications from Natural Language Using Artificial Intelligence
2y 5m to grant Granted Mar 03, 2026
Patent 12563141
SYSTEM AND METHOD OF CONNECTING A CALLER TO A RECIPIENT BASED ON THE RECIPIENT'S STATUS AND RELATIONSHIP TO THE CALLER
2y 5m to grant Granted Feb 24, 2026
Patent 12554761
DATA SOURCE CURATION FOR LARGE LANGUAGE MODEL (LLM) PROMPTS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
87%
Grant Probability
99%
With Interview (+17.2%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 1078 resolved cases by this examiner. Grant probability derived from career allow rate.
