Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-16 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
Claims 1-8 are directed to a series of steps, and are therefore directed to a process.
Claims 9-16 are directed to a system with multiple components, and are therefore directed to a machine.
Independent Claims
Step 2A Prong One
The limitations of Claim 1 recite:
receiving a request from a first user to match with one or more other users, wherein said request comprises data related to the first user about one or more data points the first user is interested in that match data provided by the one or more other users;
generating one or more matches between the first user and said one or more other users, wherein the one or more matches are based at least in part on the data related to the first user and the data provided by the one or more other users;
receiving approval to match from the first user and a matched user selected from the one or more other users;
generating one or more icebreaker contents, wherein each one of the one or more icebreaker contents is based at least in part on matching data points between the data related to the first user and the data provided by the matched user; and
initiating a real-time connection between the first user and the matched user.
The limitations of Claim 9 recite:
receive a request from a first user to match with one or more other users, wherein said request comprises data related to the first user about one or more data points the first user is interested in that match data provided by the one or more other users;
generate one or more matches between the first user and said one or more other users, wherein the one or more matches are based at least in part on the data related to the first user and the data provided by the one or more other users;
receive approval to match from the first user and a matched user selected from the one or more other users;
generate one or more icebreaker contents, wherein each one of the one or more icebreaker contents is based at least in part on matching data points between the data related to the first user and the data provided by the matched user; and
initiate a real-time connection between the first user and the matched user.
The claim limitations, as drafted, recite a concept that, under the broadest reasonable interpretation, is a certain method of organizing human activity. The limitations are analogous to managing personal behavior or interactions between people (interactions between people), or to a commercial or legal interaction (sales activity), such as generating matches between users. The generic computer implementations (see below) do not change the character of the limitations. Accordingly, the claims recite an abstract idea.
Step 2A Prong Two
The judicial exception is not integrated into a practical application. In particular, the claims recite the following additional elements:
Claim 1:
A computerized method of determining and providing matches while facilitating interaction between users in an online dating application
Claim 9:
A computerized system for determining and providing matches while facilitating interaction between users in an online dating application, the system comprising: one or more hardware processors configured by machine readable instructions to:
These additional elements are recited at a high level of generality such that they amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. Accordingly, the additional elements, when viewed individually and in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not amount to more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
Therefore, the claims are directed to an abstract idea.
Step 2B
As discussed above with respect to Step 2A Prong Two, the additional elements amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. The same analysis applies here at Step 2B. The additional elements, when considered separately and in combination, do not add significantly more to the exception: elements that merely link a judicial exception to a particular technological environment or field of use cannot integrate the exception into a practical application at Step 2A or provide an inventive concept at Step 2B. The claims are therefore ineligible.
Dependent Claims
Dependent claims 2-8 and 10-16 further narrow the same abstract ideas recited in claims 1 and 9, respectively. Therefore, claims 2-8 and 10-16 are directed to an abstract idea for the reasons given above.
No additional elements
There are no further additional elements recited in dependent claims (apart from those already recited and analyzed above in the independent claims) that change the character of the limitations. Therefore, the claims are directed to ineligible subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Farrell (US 2017/0006258 A1).
Claim 1: Farrell teaches A computerized method of determining and providing matches while facilitating interaction between users in an online dating application, the method comprising:
receiving a request from a first user to match with one or more other users, wherein said request comprises data related to the first user about one or more data points the first user is interested in that match data provided by the one or more other users; (Farrell, Par. 0094)
Farrell, in par. 0094, teaches in the context of a social media application that enables users to video chat in a speed dating format, the user may initiate a request to be connected to another user of the desired gender (or an unspecified gender) within a predetermined proximity of the determined location of the user's user device (e.g., user device 204, 208 of FIG. 2). In some embodiments, the request may be initiated by the user using the I/O device 342 within the I/O unit 232 of the user device 204, 208. For example, the user may perform a gesture recognized by the I/O device 342 (and/or the gesture analysis unit 320), such as holding down one finger on a touchscreen for a predetermined amount of time, to initiate the request.
generating one or more matches between the first user and said one or more other users, wherein the one or more matches are based at least in part on the data related to the first user and the data provided by the one or more other users; (Farrell, Par. 0096)
Farrell, in Par. 0096, teaches the communication unit 234 may utilize the profile management unit 310 (and/or the profile storage unit 332) and/or the location determination unit 314 to identify a plurality of second user devices associated with users of the desired gender (if specified) that are located within a predetermined proximity of the determined location of the user's user device. In some embodiments, the location determination unit 314 may first identify the plurality of second user devices. Next, the profile management unit 310 may access the profile storage unit 332 to identify which of the second user devices included in the plurality of second user devices identified by the location determination unit 314 are associated with second users of the gender desired by the user. The profile management unit 314 and/or the compatibility unit 322 may then filter a second user device from the plurality of second user devices to result in a filtered plurality of second user devices. As such, each second user device associated with a second user of a different gender than the gender desired by the user may be filtered (e.g., removed) from the plurality of second users so that each second user device included in the filtered plurality of second user devices is associated with a second user of the gender desired by the user. In other embodiments, gender filtering of second user devices may occur prior to determining locations of second user devices.
receiving approval to match from the first user and a matched user selected from the one or more other users; (Farrell, Par. 0101)
Farrell, in Par. 0101, teaches the user may utilize the I/O device 342 (e.g., a camera and a microphone) included in the user device 204, 208 to capture a live video feed of the user's face and voice. Similarly, the second user may utilize the I/O device 342 (e.g., a camera and a microphone) included in the second user device to capture a live video feed of the second user's face and voice. In some embodiments, the live video feeds and/or the live audio feeds captured by the user device may be transmitted from the user device to the second user device for display to the second user, and vice versa.
generating one or more icebreaker contents, wherein each one of the one or more icebreaker contents is based at least in part on matching data points between the data related to the first user and the data provided by the matched user; and (Farrell, Par. 0100)
Farrell, Par. 0100, teaches the content management unit 312 may present each user with content that the two users may discuss during their video communication connection (if desired). For example, the provided content may serve as an ice breaker for conversation and may include a random fact, a joke, a quote, a news story, an image, a video clip, an audio clip, text, and/or the like. The content may be retrieved from the content storage unit 334 by the content management unit 312 and presented to each user using the I/O device 342 included in each of their respective user devices 204, 208. See also par. 0097 for compatibility unit and par. 0051 where subunits are communicatively coupled with each other
initiating a real-time connection between the first user and the matched user. (Farrell, Par. 0101)
Farrell, Par. 0101, teaches the live video feeds and/or the live audio feeds captured by the user device may be transmitted from the user device to the second user device for display to the second user, and vice versa.
Claim 2: Farrell teaches The computerized method of claim 1, further comprising the step of providing to the first user and matched user an option to extend duration of the real-time connection between the first user and the matched user. (Farrell, Par. 0107: users can terminate the video communication early (i.e., users have the option to extend the duration of the real-time connection))
Claim 3: Farrell teaches The computerized method of claim 2, further comprising the step of extending the duration of the real-time connection between the first user and matched user based on input received from the first user and matched user. (Farrell, Par. 0107: users can terminate the video communication early (i.e., option to extend based on input))
Claim 4: Farrell teaches The computerized method of claim 1, further comprising the steps of:
analyzing audio data related to a conversation between the first user and matched user; and (Farrell, Par. 0103)
identifying at least one of the first user and matched user based on analysis of said audio data. (Farrell, Par. 0103)
Farrell, in Par. 0103, teaches the facial/vocal recognition unit 318 may analyze the live video feeds and/or the live audio feeds to determine that the live video feeds being transmitted between the users by way of the video communication connection include only each user's face. For example, the facial/vocal recognition unit 318 may employ various pixel comparison techniques described herein to identify facial features in the live video feeds of each user to determine whether the live video feeds are indeed appropriate. Additionally, the facial/vocal recognition unit 318 may analyze any captured audio of each user. Analysis of captured audio may include vocal recognition techniques so that the identity of each user may be confirmed.
Claim 5: Farrell teaches The computerized method of claim 4, further comprising the step of identifying one or more voice tones associated with the audio data. (Farrell, Par. 0103: Further, the facial/vocal recognition unit 318 may analyze captured audio of each user to identify keywords, changes in vocal pitch and/or vocal tone, and/or other objects of interest.)
Claim 6: Farrell teaches The computerized method of claim 5, further comprising the step of executing an action based on said one or more voice tones associated with the audio data. (Farrell, Par. 0104: If the facial/vocal recognition unit 318 determines any content of the live video feeds and/or the live audio feeds is inappropriate based on its analysis of the live video feeds and/or the live audio feeds (e.g., based on determining no facial features are present in the live video feeds and/or determining that inappropriate subject matter is present in the live video and/or audio feeds), then the communication unit 234 may terminate the video communication connection.)
Claim 7: Farrell teaches The computerized method of claim 6, wherein the action is selected from the group comprising terminating the real-time connection, extending the real-time connection, and providing audio or visual indicators to one or more of the first user and matched user. (Farrell, Par. 0104: If the facial/vocal recognition unit 318 determines any content of the live video feeds and/or the live audio feeds is inappropriate based on its analysis of the live video feeds and/or the live audio feeds (e.g., based on determining no facial features are present in the live video feeds and/or determining that inappropriate subject matter is present in the live video and/or audio feeds), then the communication unit 234 may terminate the video communication connection.)
Claim 8: Farrell teaches The computerized method of claim 1, further comprising the step of providing additional information to the first user and matched user based on approval by both first user and matched user to do so. (Farrell, Par. 0110: the user and/or the second user may be enabled via the I/O device 342 to select whether she or he desires to share contact information (e.g., information stored in the users' profiles) with the second user and/or the user, respectively. If both the user and the second user select to share contact information, perhaps based on consideration of a high compatibility score, the user and the second user may share contact information with each other, and the communication unit 234 may enable the user and the second user to communicate via a variety of communication channels as described herein.)
Claims 9-16: Claims 9-16 are directed to a system. Claims 9-16 recite limitations that are parallel in nature to those addressed above for claims 1-8, which are directed to a method. Claims 9-16 are therefore rejected for the same reasons as set forth above for claims 1-8, respectively. Furthermore, Farrell, in Par. 0050 and 0066, teaches the computer elements recited in claim 9.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISMAIL A MANEJWALA whose telephone number is (571)272-8904. The examiner can normally be reached M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Resha Desai can be reached on 571-270-7792. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ISMAIL A MANEJWALA/Primary Examiner, Art Unit 3628