CalOptima Deploys Data Warehouse with Melissa’s Help

Author | Case Studies, Case Study, Data Quality, Data Quality Components for SSIS, Data Warehouse, Global Data Quality

Find out how CalOptima successfully deployed a data warehouse with the help of Melissa’s easy-to-integrate Data Quality Components for SQL Server, turning a tremendous undertaking into a simple task.

CalOptima, a county-organized health system, serves more than 400,000 members, making it the second largest health insurer in Orange County, CA. CalOptima embarked on a mission to migrate its membership data into a single data warehouse in hopes of improving communications, better serving members and growing its business.

A key initiative was cleaning and verifying mailing and home address information to cut the waste associated with undeliverable-as-addressed mail – luckily, CalOptima chose Melissa, with its 30 years of contact data expertise, to help in this mission.

Melissa Launches Melissa Developer, Full-Service Portal for Rapid Application Development

Melissa Team | Big Data, Data Quality Services, Data Steward, Data Warehouse, Developer Portal, Global Data Quality, Melissa Developer, News & Events, Press Release

Comprehensive Tools and Resources Fuel Growth and Scalability, Enable Easy Application Integration of Data Cleansing and Enrichments

Rancho Santa Margarita, CALIF – April 25, 2018 – Melissa, a leading provider of global contact data quality and identity verification solutions, today announced Melissa Developer, a comprehensive portal with ready-to-use APIs (application programming interfaces) that support the development community with easy integration of data quality operations and a full spectrum of data enrichments. Tools and resources are designed for rapid application development (RAD), enabling an agile, adaptive process that helps developers quickly solve business problems for their customers.

Resources are scalable, empowering developers to add capacity and features that ensure long-term application performance and profitability. Melissa Developer also offers flexible payment options using Melissa credits. Pricing scales with growth, ensuring funds remain available for core business operations.

“Melissa Developer reflects our commitment to global data quality across the broadest range of enterprise applications and platforms,” said Bud Walker, vice president, enterprise sales and strategy, Melissa. “By streamlining developer tools and resources into an easy-access portal, we are increasing focus on data quality performance and easing long-term application scalability.”

Developers can integrate location-based operations such as verifying and standardizing both U.S. and international addresses; validating the geolocation of IP addresses with latitude/longitude for city, region, and country; and enriching records with property and mortgage data for more than 140 million U.S. properties.

Business services include identity verification that pinpoints potential risks that a customer may represent. Tools also verify, standardize, update, and enrich contact data including global emails, phone numbers, and consumer and business names. Enrichments tap into Melissa’s multisource dataset of over 25 million U.S. business records including SIC/NAICS, sales volume, number of employees, and much more.

Melissa Developer’s flexible cloud APIs support REST, JSON, XML, and SOAP technologies. The portal can handle single record, real-time processing ideal for web forms and call center applications, as well as batch processing optimized for database cleanup.
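
As a rough illustration of the single-record, real-time pattern, the sketch below posts one address to a cloud service and reads back a JSON response. The endpoint URL, field names, and license-key parameter are hypothetical placeholders; the real interface is documented in the Melissa Developer portal.

```python
import json
import urllib.request

# Hypothetical endpoint and field names -- consult the Melissa Developer
# portal for the real URL, request schema, and license-key conventions.
ENDPOINT = "https://api.example.com/address/v1/verify"

def verify_address(license_key: str, line1: str, locality: str,
                   admin_area: str, postal: str, country: str) -> dict:
    """Submit one record for real-time verification; return parsed JSON."""
    payload = json.dumps({
        "CustomerID": license_key,          # placeholder license field
        "Records": [{
            "AddressLine1": line1,
            "Locality": locality,
            "AdministrativeArea": admin_area,
            "PostalCode": postal,
            "Country": country,
        }],
    }).encode("utf-8")
    req = urllib.request.Request(ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

result = verify_address("YOUR_LICENSE_KEY", "22382 Avenida Empresa",
                        "Rancho Santa Margarita", "CA", "92688", "US")
print(json.dumps(result, indent=2))
```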

Visit Melissa Developer to get started. To connect with members of Melissa’s global intelligence team, visit www.melissa.com or call 1-800-MELISSA.

Melissa Launches Druginator, Clinical Data Quality Powered by Machine Reasoning

Melissa Team | Big Data, Data Cleansing, Data Enrichment, Data Quality, Data Warehouse, Healthcare Data Management, News & Events, Press Release

Verify, Append, and Autocomplete Millions of Pharmaceutical Names, Variants and Dosages to Improve Clinical Data Integrity and Outcomes; Demos at Bio-IT World

Rancho Santa Margarita, CALIF – May 1, 2018 – Melissa, a leading provider of global contact data quality and identity verification solutions, today announced Druginator, a new, tightly targeted element of its comprehensive data quality toolset to clean, harmonize, and connect disparate content sources for clinical insight and discovery. As part of the Melissa Informatics array of data quality solutions, datasets, and knowledge engineering resources, Druginator checks and validates millions of pharmaceutical drug names, variants, dosages, and spellings in real time against a comprehensive drug lexicon aligned with industry standards. The Melissa Informatics team will feature Druginator at the Bio-IT World Conference and Expo, #BioIT18, booth M634, May 15-17, 2018, at the Seaport World Trade Center in Boston.

Druginator provides a web-based UI for checking, verifying, and enriching drug names or lists of drugs, as well as web service APIs for drug data. Diverse, misspelled, and otherwise “dirty” drug information, whether from electronic medical records (EMRs) or from pharmaceutical dictionaries, studies, or public sources, is checked and reported with standardized “preferred terms” to become instantly usable for pharmaceutical and healthcare informatics. This provides an efficient resource to help researchers and clinicians check, verify, and normalize drug terms to reduce costs, increase accuracy, and improve research and patient outcomes.
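
This announcement does not document Druginator’s request schema, but a drug-name check through such a web service might look like the following sketch; the endpoint, query parameter, and response fields are placeholders, not the published API.

```python
import json
import urllib.parse
import urllib.request

# Placeholder Druginator-style endpoint; the real URL and response schema
# would come from Melissa Informatics' documentation.
ENDPOINT = "https://informatics.example.com/druginator/v1/check"

def check_drug_name(raw_name: str) -> dict:
    """Look up a possibly misspelled drug name and return the service's
    standardized preferred term plus any reported variants."""
    url = ENDPOINT + "?" + urllib.parse.urlencode({"name": raw_name})
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# A misspelled entry resolving to a standardized preferred term
record = check_drug_name("carbadopa levadopa")
print(record.get("PreferredTerm"), record.get("Variants"))
```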

Aligning with the industry goal to advance medicine through optimized data, Druginator harnesses the power of formal semantic technologies to apply machine reasoning to infer new concepts, linkages, and corrections to data about drugs. Future versions of Druginator will harmonize other types of mission-critical clinical data, such as diseases, genes, and proteins.

“Disconnected data is messy and not contextualized, slowing mission-critical goals such as FDA approval, time to market and understanding real-world use patterns for drugs. For example, a single medical records database was shown to contain nearly 200 different variations, spellings, and compounds of just one drug commonly used in the treatment of Parkinson’s disease. Discrepancies like this can have crucial implications on patient analyses, treatments, and treatment outcomes,” said Daniel Kha Le, vice president, Melissa Informatics.

Melissa Informatics’ drug, patient, disease, gene, protein, metabolic, and consumer-centered ontologies allow clinicians to easily refocus their research, tapping into different contextual insights based on the same data sets. With Druginator, drug names are checked and verified against a database of drug standard names, to meet FDA, UMLS and other terminology standards as required. Druginator’s append feature searches variations, alternate names, combination drugs, and available dosages to deliver comprehensive intelligence about your checked and verified drug. Autocompletion capabilities accelerate data entry, ensure standardization, and reduce clerical errors by offering an auto-populated list of variants matched in real-time.

Druginator is part of Melissa Informatics’ Sentient™ platform – semantic technologies that can be applied horizontally to accommodate the broad spectrum of pharmaceutical and clinical data harmonization and enrichment needs, integrating content across virtually any data format or terminology regardless of its original source.

To connect with Melissa Informatics or members of Melissa’s global intelligence team, visit www.melissa.com or call 1-800-MELISSA.

Record Matching Made Easy with MatchUp Web Service

Blog Administrator | Data Governance, Data Integration, Data Management, Data Matching, Data Quality, Data Quality Components for SSIS, Data Steward, Data Warehouse, Duplicate Elimination, Fuzzy Matching, Golden Record, Householding, Identity Resolution, Record Linkage, SQL Server Integration Services, SSIS, Survivorship

MatchUp®, Melissa’s solution to identify and eliminate duplicate records, is now available as a web service for batch processes, fulfilling one of the most frequent requests from our customers: accurate database matching without maintaining and linking to libraries, or shelling out for the necessary locally hosted data files.

Now you can integrate MatchUp into any aspect of your network that can communicate with our secure servers using common protocols like XML, JSON, REST, or SOAP.

Select a predefined matching strategy, map the table input columns needed to identify matches to the respective request elements, and submit the records for processing. Duplicate rows can be identified by a combination of NAME, ADDRESS, COMPANY, PHONE, and/or EMAIL.
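
As a sketch of that flow, the request below maps NAME and ADDRESS columns to request elements and submits two rows for matching; the endpoint, element names, and strategy label are assumptions, not the published MatchUp schema.

```python
import json
import urllib.request

# Hypothetical endpoint and element names; the actual schema is defined
# by the MatchUp Web Service endpoint you select for your platform.
ENDPOINT = "https://matchup.example.com/v1/deduplicate"

request_body = {
    "CustomerID": "YOUR_LICENSE_KEY",
    "MatchingStrategy": "NameAndAddress",  # one of the predefined strategies
    "Records": [
        # RecordID links results back to rows in your own table.
        {"RecordID": "1", "Name": "John Smith",
         "Address": "22382 Avenida Empresa", "PostalCode": "92688"},
        {"RecordID": "2", "Name": "Jon Smith",
         "Address": "22382 Avenida Empresa", "PostalCode": "92688"},
    ],
}

req = urllib.request.Request(ENDPOINT,
                             data=json.dumps(request_body).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))
```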

Our select list of matching strategies removes the complexity of configuring rules while still applying our fast and versatile fuzzy matching algorithms and extensive datatype-specific knowledge base, ensuring that tough-to-identify duplicates will be flagged by MatchUp.

The output response returned by the service can be used to update a database or create a unique marketing list: evaluate each record’s result codes, group identifier, and group count, and use the record’s unique identifier to link back to the original database record.
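
Continuing the sketch above with an assumed response shape (a result-code string, a GroupID shared by records judged duplicates, and a GroupCount), the evaluation step might look like this:

```python
# Field names and the result code below are placeholders. Keep the first
# record in each duplicate group and flag the rest, using RecordID to
# link back to the source table.
response = {
    "Records": [
        {"RecordID": "1", "Results": "XX01", "GroupID": 7, "GroupCount": 2},
        {"RecordID": "2", "Results": "XX01", "GroupID": 7, "GroupCount": 2},
    ]
}

survivors, duplicates = {}, []
for rec in response["Records"]:
    group = rec["GroupID"]
    if group not in survivors:
        survivors[group] = rec["RecordID"]  # first record in group survives
    else:
        duplicates.append(rec["RecordID"])  # flag the rest for update/removal

print("keep:", sorted(survivors.values()), "flag:", duplicates)
```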

Since Melissa’s servers do the processing, there are no key files – the temporary sorting files – to manage, freeing up valuable hardware resources on your local server.

Customers can access the MatchUp Web Service by obtaining a valid license from our sales team and selecting the endpoint compatible with their development platform and required request structures.

Know Your Customer Compliance Boosted by Tech Innovations

Blog Administrator | Analyzing Data, Analyzing Data Quality, Big Data, Data Cleansing, Data Governance, Data Integration, Data Management, Data Matching, Data Migration, Data Profiling, Data Quality, Data Warehouse, FinTech, Identity Resolution

Fintech breakthroughs that identify, validate, and augment customer data help financial services organizations head off fraud and operate within the rules.

One of the most pressing issues facing the financial services industry today is security and fraud prevention. Big data and analytics leverage the power of the Internet, but they also offer big, juicy plums for hackers, credit card fraudsters, money launderers, and terrorists.

In response, many countries, including the U.S., have established Know Your Customer (KYC) requirements intended to guide financial institutions in heading off fraudulent transactions. A key way for banks to do this is to clearly identify legitimate banking clients and their business relationships so that bad operators become more obvious and identifiable.

But it can be a challenge. Legitimate customers often have multiple banking relationships with a single institution, with identifying information stored in different formats in a multitude of databases. Various family members may also be account owners, but have different names and live at different addresses, possibly even in different countries.

Linking all these threads together while correcting name misspellings, standardizing mailing address formats, and parsing precise geolocation IDs can be the stuff of banking compliance nightmares.

Not doing this due diligence, however, can be catastrophic. Being hacked and having millions siphoned out of your customers’ accounts is one thing, but having the government ready and willing to fine you for failing various KYC tests can be just as damaging. Can you say “loss of reputation and customers”?

A WORLDWIDE PROBLEM

Most notably non-compliant of late have been banks based or doing business in India, but it’s a worldwide problem. A New York-based broker-dealer was recently charged with KYC violations and with negligently allowing illegal trading by one of its customers. Perhaps the highest-profile recent case was Morgan Stanley Smith Barney running afoul of a variety of KYC violations. The main charge: Morgan Stanley failed to properly identify an assortment of “red flags” that signaled illegal activity.

Thankfully, technological breakthroughs increasingly offer solutions of their own. Just as the digital world has enabled bad players and victimized banks worldwide, technology is fighting back with sophisticated Know Your Customer tools.

A big step forward is the ability to accurately verify names and addresses. In a global world without borders, technology that verifies, cleans, completes, and standardizes names, addresses, phone numbers, and emails on a worldwide scale immensely aids the process of knowing one’s customers.

Technologies also exist that add in missing contact fields, find census and area-specific statistical details, and provide precise demographic information. When banks are able to combine census and area-specific details with accurate names and addresses, they’ll know pretty closely whether a variant player is really a customer or a bad guy.

KYC guidelines are very specific about risky areas prone to scams and schemes. A wide variety of countries are identified by the U.S. State Department as prone to ignoring money laundering, tolerating suspicious transactions, and generally lacking adequate know-your-customer requirements. Ignoring this, in fact, was one of the issues that burned Morgan Stanley, permitting criminal activities to continue unchecked.

While IP lookup that identifies exactly where a digital communication has come from has been around for a while, new geocoding breakthroughs can convert IP addresses into precise latitude and longitude coordinates around the world. Most European and international identity cards can also be verified, along with mobile phone numbers and driver’s license information.
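
As a rough sketch of how an IP-to-coordinates lookup might feed a KYC screen (the endpoint, response fields, and risk list are placeholders, not any specific vendor’s API):

```python
import json
import urllib.parse
import urllib.request

# Placeholder geolocation endpoint; a real deployment would call the
# vendor's documented IP-location service with a license key.
ENDPOINT = "https://geo.example.com/v1/iplocate"

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # populate from State Department guidance

def screen_ip(ip: str) -> tuple:
    """Return (latitude, longitude, flagged) for a transaction's source IP."""
    url = ENDPOINT + "?" + urllib.parse.urlencode({"ip": ip})
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    flagged = data["CountryCode"] in HIGH_RISK_COUNTRIES
    return data["Latitude"], data["Longitude"], flagged

lat, lon, flagged = screen_ip("203.0.113.42")  # TEST-NET example address
print(f"origin: ({lat}, {lon}); review required: {flagged}")
```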

One of the essential elements in all this is keeping customer data up to date. Accurate contact information is estimated to deteriorate severely and regularly, and good customers who merely move across town can confuse inadequate screening processes and raise red flags (false positives) when they shouldn’t. Today there are a variety of modules banks can use to update customer information quickly and accurately.

The bottom line is this: financial services institutions now have lots of new reasons to love fintech that mitigates KYC concerns by identifying legitimate customers and flagging the ne’er-do-wells before they can be effective.